MATHEMATICS, PHILOSOPHY, SCIENCE

To Infinity…and Beyond!

In researching Gödel’s Incompleteness Theorem, I stumbled upon an article that stated no one has proven a line can extend infinitely in both directions. This is shocking, if it’s true, and after a quick Google search, I couldn’t seem to find anything that contradicts the claim. So, in the spirit of intellectual adventure, I’ll offer a fun proof-esque idea here.

Consider a line segment of length \ell that is measured in some standard unit of distance/length (e.g., inches, miles, nanometers, etc.). We convert the length of \ell—whatever units and length we’ve chosen (say, 0.5298 meters)—into a fictitious unit of measurement we’ll call Hoppes (hpe) [pronounced HOP-ease]. So, now, one should consider the length of \ell to be 2 hpe such that \ell/2 = 1 hpe. We then add some fraction (of the length of) \ell to (both ends of) itself, and let’s say the fraction of \ell we’ll use, call it a, is 3\ell/4, which equals 3/2 hpe. The process by which we will lengthen \ell is governed by the following geometric series:

s_n(a) = 1+a+a^2+a^3+\dots+a^{n-1} = (1-a^n)(1-a)^{-1}=\frac{a^n-1}{a-1}.

Let us add the terms of s_n(a) to both ends of \ell: first, we add 1 hpe to each end (\ell=4 hpe), then 3/2 hpe (\ell=7 hpe), then 9/4 hpe (\ell=23/2 hpe), and so forth. If we keep adding units of hpe to \ell based on the series s_n(a), then we’re guaranteed a line that extends infinitely in both directions because \lim_{n\rightarrow\infty} (a^{n}-1)(a-1)^{-1} = \infty when a \geq 1.

Now, suppose we assume it is impossible to extend our line segment infinitely in both directions. Then s_n(a) must converge to (1-a)^{-1}, giving us a total length of 2+2(1-a)^{-1} hpe for \ell (the series is added to each end), because \lim_{n\rightarrow\infty} (1-a^{n})=1, which is only possible when \vert a\vert < 1. (We cannot have a negative length, so a\in \text{R}^+_0.) But this contradicts our value of a = 3/2 above, which means the series s_n(a) is divergent. Q.E.D.
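
As a minimal numerical sketch of the construction (in Python; the helper total_length and the sample values of a are mine, not part of the argument), compare the divergent case a = 3/2 with a convergent case a = 1/2:

# Length of ell (in hpe) after adding the first n terms of
# s_n(a) = 1 + a + a^2 + ... to both ends of the segment.
def total_length(a, n, ell=2.0):
    added = sum(a**k for k in range(n))
    return ell + 2 * added

for a in (1.5, 0.5):
    print(a, [total_length(a, n) for n in (1, 2, 3, 10, 50)])
# a = 1.5: 4.0, 7.0, 11.5, ... grows without bound
# a = 0.5: approaches 2 + 2/(1 - 0.5) = 6.0 hpe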

N.B. Some might raise the “problem” of the infinite number of discrete points that compose a line (segment), recalling the philosophical thorniness of Zeno’s (dichotomy) paradox; this is resolved, however, by similarly invoking the concept of limits (and is confirmed by our experience of traversing complete distances!):

\sum_{i=1}^{\infty} (1/2)^i=\frac{1}{2}\sum_{i=0}^{\infty} (1/2)^i=\frac{1}{2}\lim_{n\to\infty} s_n\big(\tfrac{1}{2}\big)=\frac{1}{2}\Big( 1+\frac{1}{2}+\big(\tfrac{1}{2}\big)^2+\cdots\Big)=\frac{1}{2}\Big(\frac{1}{1-\frac{1}{2}}\Big) = 1,

a single unit we can set equal to our initial line segment \ell with length 2 hpe.


Special thanks to my great friend, Tim Hoppe, for giving me permission to use his name as an abstract unit of measurement.

MATHEMATICS, RELIGION, SCIENCE

A Proposed Proof for the Existence of God

Assume it is impossible to prove God does not exist. Then the probability that God exists, p(\text{G}), however minuscule, is greater than zero—that is, p(\text{G}) = 1/g \in (0,1). Also assume, as many important physicists and cosmologists do, that (1) the multiverse exists and is composed of an infinite number of independent universes and (2) our current universe is but one of those infinite universes existing in the multiverse.*

If the probability of the non-existence of God, denoted p(-\text{G}), in some universe is defined as

p(-\text{G}) = (1 - g^{-1})

then as the number of universes (n) approaches infinity,

\lim_{n \rightarrow \infty} (1 - g^{-1})^n = 0.

That is, any event that can happen will ineluctably happen given enough trials. This means God must exist in at least one universe within the multiverse, and if He does, then He must exist in all universes, including our universe, because omnipresence is a necessary condition for God to exist.

Q.E.D.
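
As a quick numerical illustration of the limit above, here is a minimal Python sketch, where g = 10^6 is an arbitrary stand-in for very long odds:

# P(God exists in none of n independent universes) = (1 - 1/g)^n.
g = 10**6
for n in (10**6, 10**8, 10**10):
    print(n, (1 - 1/g)**n)
# ~0.37, then ~3.7e-44, then 0.0 (floating-point underflow): the
# probability of "no God in any universe" vanishes as n grows.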

* This is certainly a reasonable, if not ubiquitously held, concept that follows from the mathematics of inflationary theory. In Our Mathematical Universe, for example, Max Tegmark suggests if “inflation…made our space infinite[, t]hen there are infinitely many Level I parallel universes” (121).

MATHEMATICS, PHILOSOPHY, RELIGION, SOCIOLOGY

The Myth of Altruism

The American Heritage Dictionary (2011) defines “altruism” as “selflessness.” If one accepts that standard definition, then it seems reasonable to view an “altruistic act” as one that fails to produce a net gain in personal benefit for the actor subsequent to its completion. (Here, we privilege psychological altruism as opposed to biological altruism, which is often dismissed by the “selfish gene” theory of Darwinian selection and notions of reproductive fitness.) Most people, however, assume psychologically based altruistic acts exist because they believe an act that does not demand or expect overt reciprocity or recognition by the recipient (or others) is so defined. But is this view sufficiently comprehensive, and is it really possible to behave toward others in a way that is completely devoid of self? Is self-interest an ineluctable process with respect to volitional acts of kindness? Here, we explore the likelihood of engaging in an authentically selfless act and of capturing true altruism in general. (Note: For those averse to mathematical jargon, feel free to skip to the paragraph that begins with “[A]t this stage” to get a basic understanding of orthogonality and then move to the next section, “Semantic States as ‘Intrinsic Desires’,” without losing much traction.)

The Model

Imagine for a moment every potential (positive) outcome that could emerge as a result of performing some act—say, holding the door for an elderly person. You might receive a “thank you,” a smile from an approving onlooker, someone reciprocating in kind, a feeling you’ve done what your parents (or your religious upbringing) might have expected you to do, perhaps even a monetary reward—whatever. (Note: We assume there will never be an eager desire or expectation for negative consequences, so we require all outcomes to be positive, beneficial events. Of course, a comprehensive model would also include the desire to avoid negative consequences—the ignominy of failing to return a wallet or to aid a helpless animal (an example we will revisit later)—but these can be transformed into positive statements that avoid the unnecessary complications associated with the contrapositive form.)

We suppose there are n outcomes, and we can imagine each outcome enjoys a certain probability of occurring. We will call this the potential vector \mathbf{p}, the components of which are simply the probabilities that each outcome (ordered 1 through n) will occur:

\mathbf{p} = [p(1), p(2), p(3),\dots,p(n-1),p(n)]

and 0\leq p(i)\leq 1 where \sum_{i=1}^n p(i) does not have to equal 1 because events are independent and more than a single outcome is possible. (You might, for example, receive both a “thank you” and a dollar bill for holding the door for an elderly woman.) So, the vector \mathbf{p} represents the agglomeration of the discrete probabilities of every positive thing that could occur to one’s benefit by engaging in the act.

Consider, now, another vector, \mathbf{q}, that represents the constellation of desires and expectations for the possible outcomes enumerated in \mathbf{p}. That is, if \mathbf{q} = [q(1),q(2),q(3),\dots,q(n-1),q(n)], then q(i) catalogs the interest and desire in outcome p(i). (It might be convenient to imagine \mathbf{q} as a binary vector of length n and an element of \text{Z}_2^n, but we do better to treat \mathbf{q} vectors as a subset of the parent vector space \text{R}^n to which \mathbf{p} belongs.) In other words, q(i) \in \{0,1\}: either you desire the outcome (whose probability is denoted by) p(i) or you don’t. (There are no “probabilities of expectation or desire” in our model.) We will soon see how these vectors address our larger problem of quantifying acts of altruism.

The point \text{Q} in \text{R}^n is determined by \mathbf{q}, and we want to establish a plane parallel to (and including) \mathbf{q} with normal vector \mathbf{p}. Define a point \text{X} generated by a vector \mathbf{x} = t\mathbf{q} where the scalar t>1 and \mathbf{x} = [c_1,c_2,c_3,\dots,c_{n-1},c_n]. If \mathbf{p} is normal to \mathbf{x} - \mathbf{q}, then the normal-form equation of the plane is given by \mathbf{p}\cdot(\mathbf{x} - \mathbf{q})=0; and since \mathbf{x} - \mathbf{q} = (t-1)\mathbf{q}, that condition holds precisely when \mathbf{p}\cdot\mathbf{q}=0, giving the general equation

\sum_{i=1}^n p(i)c_i = p(1)c_1 + p(2)c_2 + \dots + p(n-1)c_{n-1} + p(n)c_n=0.

We now have a foundation upon which to establish a basic, quantifiable metric for altruism. If we assume, as we did above, that an altruistic act benefits the recipient and fails to generate any positive benefits for the actor, then such an act must involve potential and expectation vectors whose scalar product equals zero, which means they stand in an orthogonal (i.e., right-angle) relationship to each other. It is interesting to note there are only two possible avenues for \mathbf{p}\cdot\mathbf{q} orthogonality within our model: (a) the actor desires and/or expects absolutely no rewards (i.e., \mathbf{q}=\mathbf{0}), which is the singular and generally understood notion of altruism, and (b) the actor only desires and/or expects rewards that are simply impossible (i.e., p(i)=0 where q(i)=1). (We will assume \mathbf{p}\neq\mathbf{0}.) In all other cases, the scalar product will be greater than zero, violating the altruism requirement that there be no benefit to the actor. Framed another way, (the vector of) an altruistic act forms part of a basis for a subspace in \text{R}^n.

At this stage, it might be beneficial to pause and walk through a very easy example. Imagine there are only three possible outcomes for buying someone their morning coffee at Starbucks: (1) the recipient says “thank you,” (2) someone buys your coffee for you (“paying it forward”), and (3) the person offers to pay your mortgage. A reasonable potential vector might be [0.9, 0.5, 0]—i.e., there’s a 90% chance you’ll get a “thank you,” a 50% chance someone else will buy your coffee for you, and a zero-percent chance this person will pay your mortgage. Now, assume your expectation vector for those outcomes is [1, 0, 0]—you expect people to say “thank you” when someone does something nice for them, but you don’t expect someone to buy your coffee or pay your mortgage as a result. The scalar product is greater than zero (0.9(1) + 0.5(0) + 0(0) = 0.9), which means the act of buying the coffee fails to meet the requirement for altruism (i.e., the potential vector is not orthogonal to the plane that includes \text{Q} and \text{X} = t\mathbf{q}). In this example, as we’ve seen in the general case, the only way buying the coffee could have been an altruistic act is if (a) the actor expects or desires no outcome at all or (b) the actor expected or desired her mortgage to be paid (and nothing else). We will discuss later the reasonableness of the former scenario. (It might also be interesting to note the model can quantify the degree to which an act is altruistic.)
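
The same computation as a minimal Python sketch (the vectors are the hypothetical coffee-shop values from above):

# Altruism test: in this model, an act is altruistic iff p . q = 0.
def scalar_product(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

p = [0.9, 0.5, 0.0]   # potential: "thank you", coffee paid forward, mortgage
q = [1, 0, 0]         # expectation: only a "thank you"

print(scalar_product(p, q))   # 0.9 -> not altruistic
# q = [0, 0, 0] (case a) or q = [0, 0, 1] (case b) would yield 0.0 instead.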

The above formalism will work in every case where there is a single, fixed potential vector and a specified constellation of expectations; curious readers, however, might be interested in cases where there exists a non-scalar-multiple range of expectations (i.e., when \text{X} is generated by \mathbf{x}\neq t\mathbf{q} for any scalar t), and we can dispatch the formalism fairly quickly. In these cases, orthogonality would involve a specific potential vector and a plane involving the displacement of expectation vectors. The vector form of this plane is \mathbf{x}=\mathbf{q} + t_1\mathbf{u} + t_2\mathbf{v}, and direction vectors \mathbf{u},\mathbf{v} are defined as follows:

\mathbf{u}=\overrightarrow{QS}=[s(1)-q(1),s(2)-q(2),\ldots,s(n-1)-q(n-1),s(n)-q(n)]

with \mathbf{v} defined similarly for points Q and R; t_i are scalars (possibly understood as time per some unit of measurement for a transition vector), and points S and R of the direction vectors are necessarily located on the plane in question. Unpacking the vector form of the equation yields the following matrix equation:

\begin{bmatrix}c_1\\c_2\\c_3\\ \vdots\\c_{n-1}\\c_n\end{bmatrix}=\begin{bmatrix}q(1)\\q(2)\\q(3)\\ \vdots\\q(n-1)\\q(n)\end{bmatrix}+t_1\begin{bmatrix}s(1)-q(1)\\s(2)-q(2)\\s(3)-q(3)\\ \vdots\\s(n-1)-q(n-1)\\s(n)-q(n)\end{bmatrix}+t_2\begin{bmatrix}r(1)-q(1)\\r(2)-q(2)\\r(3)-q(3)\\ \vdots\\r(n-1)-q(n-1)\\r(n)-q(n)\end{bmatrix}

whose parametric equations are

\begin{matrix}c_1=q(1)+t_1[s(1)-q(1)]+t_2[r(1)-q(1)]\\ \vdots\\ c_n=q(n)+t_1[s(n)-q(n)]+t_2[r(n)-q(n)].\end{matrix}

It’s not at all clear how one might interpret “altruistic orthogonality” between a potential vector and a transition or range (i.e., subtraction) vector of expectations within this alternate plane, but it will be enough for now to consider its normal vectors—one at Q and, if we wish, one at X (through the appropriate mathematical adjustments)—as secondary altruistic events orthogonal to the relevant plane intersections:

p_1(1)c_1 - p_2(1)c_1 + p_1(2)c_2 - p_2(2)c_2 + \dots + p_1(n)c_n - p_2(n)c_n = 0.
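
For concreteness, here is a small numpy sketch of the plane-and-normal construction just described (n = 3 so a normal can be computed via the cross product; the points Q, S, R are invented for illustration). Any point \mathbf{x}=\mathbf{q}+t_1\mathbf{u}+t_2\mathbf{v} satisfies the normal-form equation:

import numpy as np

q = np.array([1.0, 0.0, 0.0])   # hypothetical point Q (expectations)
s = np.array([0.0, 1.0, 0.0])   # hypothetical point S on the plane
r = np.array([0.0, 0.0, 1.0])   # hypothetical point R on the plane

u, v = s - q, r - q             # direction vectors of the plane
p = np.cross(u, v)              # a normal vector (cross product works in R^3)

t1, t2 = 0.7, -2.5              # arbitrary parameters
x = q + t1 * u + t2 * v         # a point on the plane
print(np.dot(p, x - q))         # ~0.0: p is orthogonal to the plane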

Semantic States as ‘Intrinsic Desires’

To this point, we’ve established a very simple mathematical model that allows us to quantify a notion of altruism, but even this model hinges on the likelihood that one’s expectation vector equals zero: an actor neither expects nor desires any outcome or benefit from engaging in the act. This seems plausible for events we can recognize and catalog (e.g., reciprocal acts of kindness, expressions of affirmation, etc.), but what about the internal motivations—philosophers refer to these as intrinsic desires—that very often drive our decision-making process? What can we say about acts that resonate with these subjective, internal motivations like religious upbringing, a generic sense of rectitude, cultural conditioning, or the Golden Rule? These intrinsic desires must also be included in the collection of benefits we might expect to gain from engaging in an act and, thus, must be included in the set of components of potential outcomes. If you’ve been following the above mathematical discussion, such internal states guarantee non-orthogonality; that is, they secure a positive scalar product \mathbf{p}\cdot\mathbf{q} because p(k), q(k) > 0 for some internal state k. This means internal states belie a genuine act of altruism. It is important to note, too, these acts are closely associated with notions of social exchange theory, where (1) “assets” and “liabilities” are not necessarily objective, quantifiable things (e.g., wealth, beauty, education, etc.) and (2) one’s decisions often work toward shrinking the gap between the perceived self and ideal self. (See, particularly, Murstein, 1971.) In considering the context of altruism, internal states combine these exchange features: An act that aligns with some intrinsic desire will bring the actor closer to the vision of his or her ideal self, which, in turn, will be subjectively perceived and experienced as an asset. Altruism is perforce banished in the process.

So, the question then becomes: Is it possible to act in a way that is completely devoid of both a desire for external rewards and any motivation involving intrinsic desires, internal states that provide (what we will conveniently call) semantic assets? As I hope I’ve shown, yes, it is (mathematically) possible—and in light of that, then, I might have been better served placing quotes around the word myth in the title—but we must also ask ourselves the following question: How likely is it that an act would be genuinely altruistic given our model? If we imagine secondary (non-scalar) planes P_1, P_2,\dots, P_n composed of expectation vectors from arbitrary points p_1,p_2,\dots,p_n (with p_j \in P_j) parallel to the x-axis, as described above, then it is easy to see there are a countably infinite number of planes orthogonal to the relevant potential vector. (Assume \mathbf{q}\neq \mathbf{0} because if \mathbf{q} is the zero vector, it is orthogonal to every plane.) But there are an (uncountably) infinite number of angles 0<\theta<\pi with \theta\neq\pi/2, which means there exists a far greater number of planes that are non-orthogonal to a given potential vector—and this only considers \theta rotations in \text{R}^2 as a two-dimensional slice of our outcome space \text{R}^n. As you might be able to visualize, the number of non-orthogonal planes grows considerably if we include \theta rotations in \text{R}^3. Within the context of three dimensions, and to get a general sense of the unlikelihood of acquiring random orthogonality, suppose there exists a secondary plane, as described above, for every integer-based (degree) value of 0<\theta<\pi with rotations in \text{R}^2; then a potential vector being orthogonal to a randomly chosen plane P_j of independent expectation vectors is highly improbable: p = 1/178 \approx 0.00561797753. If we add \text{R}^3 rotations to those already permitted, the p-value for random orthogonality decreases to p \approx 0.00001564896, a value so small as to be essentially nonexistent. So, although altruism is theoretically possible because our model admits the potential for orthogonality, our model also suggests such acts are quite unlikely, especially for large n. For philosophically sophisticated readers, the model supports the theory of psychological altruism (henceforth ‘PA’) that informs the vast majority of decisions we make in response to others, but based on p-values associated with the prescribed model, I would argue we’re probably closer to Thomas Hobbes’s understanding of psychological egoism (henceforth ‘PE’), even though the admission of orthogonality subverts the totalitarianism and inflexibility inherent within PE.

One final thought explicates the obvious problem with our discussion to this point: There isn’t any way to quantify probabilities of potential outcomes based on events that haven’t yet happened, even though we know intuitively such probabilities, outcomes, and expectations exist. To be sure, the concept of altruism is palpably more philosophical or psychological or Darwinian than mathematical, but our model is successful in its attempt to provide a skeletal structure to a set of disembodied, intrinsic desires—to posit our choices are, far more often than they are not, means to ends (whether external or internal) rather than selfless, other-directed ends in themselves.

Some Philosophical Criticisms

Philosophical inquiry concerning altruism is rich and varied. Aristotle believed the concept of altruism—the specific word was not coined until 1851 by Auguste Comte—was an outward-directed moral good that benefited oneself, the benefits accruing in proportion to the number of acts committed. Epicurus argued that selfless acts should be directed toward friends, yet he viewed friendship as the “greatest means of attaining pleasure.” Kant held for acts that belied self-interest but argued, curiously, they could also emerge from a sense of duty and obligation. Thomas Hobbes rejected the notion of altruism altogether; for him, every act is pregnant with self-interest, and the notion of selflessness is an unnatural one. Nietzsche felt altruistic acts were degrading to the self and sabotaged each person’s obligation to pursue self-improvement and enlightenment. Emmanuel Levinas argued individuals are not ends in themselves and that our priority should be (and can only be!) acting benevolently and selflessly towards others—an argument that fails to address the conflict inherent in engaging with a social contract where each individual is also a receiving “other.” (This is the problem with utilitarian-based approaches to altruism, in general.) Despite the varied historical analyses, nearly every modern philosopher (according to most accounts) rejects psychological egoism—the theory that every act is driven by benefits to self—and accepts, as our model admits, that altruism does motivate a certain number of volitional acts. But because our model suggests very low p-values for PA, it seems prudent to address some of the specific arguments against a prevalent, if not unshirted, egoism.

1. Taking the blue pill: Testing for ‘I-desires’

Consider the following story:

Mr. Lincoln once remarked to a fellow passenger…that all men were prompted by selfishness in doing good. His [companion] was antagonizing this position when they were passing over a corduroy bridge that spanned a slough. As they crossed this bridge they espied an old razor-backed sow on the bank making a terrible noise because her pigs had got into the slough and were in danger of drowning. [M]r. Lincoln called out, ‘Driver can’t you stop just a moment?’ Then Mr. Lincoln jumped out, ran back and lifted the little pigs out of the mud….When he returned, his companion remarked: ‘Now Abe, where does selfishness come in on this little episode?’ ‘Why, bless your soul, Ed, that was the very essence of selfishness. I should have had no peace of mind all day had I gone on and left that suffering old sow worrying over those pigs.’ [Feinberg, Psychological Altruism]

The author continues:

What is the content of his desire? Feinberg thinks he must really desire the well-being of the pigs; it is incoherent to think otherwise. But that doesn’t seem right. Feinberg says that he is not indifferent to them, and of course, that is right, since he is moved by their plight. But it could be that he desires to help them simply because their suffering causes him to feel uncomfortable (there is a brute causal connection) and the only way he has to relieve this discomfort is to help them. Then he would, at bottom, be moved by an I-desire (‘I desire that I no longer feel uncomfortable’), and the desire would be egoistic. Here is a test to see whether the desire is basically an I-desire. Suppose that he could simply have taken a pill that quietened the worry, and so stopped him being uncomfortable, and taking the pill would have been easier than helping the pigs. Would he have taken the pill and left the pigs to their fate? If so, the desire is indeed an I-desire. There is nothing incoherent about this….We can apply similar tests generally. Whenever it is suggested that an apparently altruistic motivation is really egoistic, since it [is] underpinned by an I-desire, imagine a way in which the I-desire could be satisfied without the apparently altruistic desire being satisfied. Would the agent be happy with this? If they would, then it is indeed an egoistic desire. If not, it isn’t.

This is a powerful argument. If one could take a pill—say, a tranquilizer—that would relieve the actor from the discomfort of engaging the pigs’ distress, which is the assumed motivation for saving the pigs according to the (apocryphal?) anecdote, then the volitional act of getting out of the coach and saving the pigs must be considered a genuinely altruistic act because it is directed toward the welfare of the pigs and is, by definition, not an “I-desire.” But this analysis makes two very large assumptions: (1) there is a singular motivation behind an act and (2) we can whisk away a proposed motivation by some physical or mystical means. To be sure, there could be more than one operative motivation for an action—say, avoiding discomfort and receiving a psychosocial reward—and the thought-experiment of a pill removing the impetus to act does not apply in all cases. Suppose, for example, one only desires to avoid the pigs’ death and not the precursor of their suffering. Is it meaningful to imagine the possibility of a magical pill that could avoid the pigs’ death? If by the “pill test” we intend to eviscerate any and all possible motivations by some fantastic means, then we really haven’t said much at all. We’ve only argued the obvious tautology: that things would be different if things were different. (Note: the conditional A \rightarrow A is always true, which means A \leftrightarrow A is, too.) Could we, for example, apply this test to our earlier coffee experiment? Imagine our protagonist could take a pill that would, by acting on neurochemical transmitters, magically satisfy her expectation and desire for being thanked for purchasing the coffee. Can we really say her motivation is now altruistic, presumably because the pill has rendered an objective “thank you” from the recipient unnecessary? In terms of our mathematical model, does the pill create a zero expectation vector? It’s quite difficult to imagine this is the case; the motivation—that is, the expectation of, and desire for, a “thank you”—is not eliminated because it is fulfilled by a different mechanism.


2. Primary object vs. Secondary possessor

As a doctor who desires to cure my patient, I do not desire pleasure; I desire that my patient be made better. In other words, as a doctor, not all my particular desires have as their object some facet of myself; my desire for the well-being of my patient does not aim at alteration in myself but in another. My desire is other-regarding; its object is external to myself. Of course, pleasure may arise from my satisfied desire in such cases, though equally it may not; but my desire is not aimed at my own pleasure. The same is true of happiness or interest: my satisfied desire may make me happy or further my interest, but these are not the objects of my desire. Here, [Joseph] Butler simply notices that desires have possessors – those whose desires they are – and if satisfied desires produce happiness, their possessors experience it. The object of a desire can thus be distinguished from the possessor of the desire: if, as a doctor, my desire is satisfied, I may be made happy as a result; but neither happiness nor any other state of myself is the object of my desire. That object is other-regarding, my patient’s well-being. Without some more sophisticated account, psychological egoism is false. [See Butler, J. (1726) Fifteen Sermons Preached at the Rolls Chapel, London]

Here, the author errs not in assuming pleasure can be a residual feature of helping his patients—it can be—but in presuming his desire for the well-being of others is a first cause. It is likely that such a desire originates from a desire to fulfill the Hippocratic oath, to avoid imposing harm, which demands professional and moral commitments from a good physician. The desire to be (seen as) a good physician, which requires a (“contrapositive”) desire to avoid harming patients, is clearly a motivation directed toward self. Receiving a “thank you” for buying someone’s coffee might create a feeling of pleasure within the actor (in response to the pleasure felt and/or exhibited by the recipient), but the pleasure of the recipient is not necessarily (and is unlikely to be) a first cause. If it were a first (and only) cause, then all the components of the expectation vector would be zero and the act would be considered altruistic. Notice we must qualify that if-then statement with the word “only” because our model treats such secondary “I-desires” as unique components of the expectation vector. (“Do I desire the feeling of pleasure that will result from pleasing someone else when I buy him or her coffee?”) We will set aside the notion that an expectation of a residual pleasurable feeling in response to another’s pleasure is not necessarily an intrinsic desire. I can expect to feel good in response to doing X without desiring, or being motivated by, that feeling—this is the heart of the author’s argument—but if any part of the motivation for buying the coffee involves a desire to receive pleasure—even if the first cause involves a desire for the pleasure of others—then the act cannot truly be cataloged as altruistic because, as mentioned above, it must occupy a component within \mathbf{q}. The issue of desire, then, requires an investigation into first causes (i.e., “ultimate” motivations), and the logical fallacy of Joseph Butler’s argument (against what is actually psychological hedonism) demands it.


3. Sacrifice or pain

Also taken from the above link:

A simple argument against psychological egoism is that it seems obviously false….Hume rhetorically asks, ‘What interest can a fond mother have in view, who loses her health by assiduous attendance on her sick child, and afterwards [sic] languishes and dies of grief, when freed, by its death, from the slavery of that attendance?’ Building on this observation, Hume takes the ‘most obvious objection’ to psychological egoism: ‘[A]s it is contrary to common feeling and our most unprejudiced notions, there is required the highest stretch of philosophy to establish so extraordinary a paradox. To the most careless observer there appear to be such dispositions as benevolence and generosity; such affections as love, friendship, compassion, gratitude. […] And as this is the obvious appearance of things, it must be admitted, till some hypothesis be discovered, which by penetrating deeper into human nature, may prove the former affections to be nothing but modifications of the latter.’ Here Hume is offering a burden-shifting argument. The idea is that psychological egoism is implausible on its face, offering strained accounts of apparently altruistic actions. So the burden of proof is on the egoist to show us why we should believe the view.

Sociologist Emile Durkheim argued that altruism involves voluntary acts of “self-destruction for no personal benefit,” and like Levinas, Durkheim believed selflessness was informed by a utilitarian morality despite his belief that duty, obligation, and obedience to authority were also counted among selfless acts. The notion of sacrifice is perhaps the most convincing counterpoint to overriding claims to egoism. It is difficult to imagine a scenario, all things being equal, where sacrifice (and especially pain) would be a desired outcome. It would seem that a decision to act in the face of personal sacrifice, loss, or physical pain would almost certainly guarantee a genuine expression of altruism, yet we must again confront the issue of first causes. In the case of the assiduous mother, sacrifice might serve an intrinsic (and “ultimate”) desire to be considered a good mother. In the context of social-exchange theory, the asset of being (perceived as) a good mother outweighs the liability inherent within self-sacrifice. Sacrifice, after all, is what good mothers do, and being a good mother resonates more closely with the ideal self, as well as society’s coeval definition of what it means to be a “good mother.” In a desire to “do the right thing” and “be a good mother,” then, she chooses sacrifice. It is the desire for rectitude (perceived or real) and the positive perception of one’s approach to motherhood, not solely the sacrifice itself, that becomes the galvanizing force behind the act. First causes very often answer the following question: “What would a good [insert category or group to which membership is desired] do?”

What of pain? We can imagine a scenario in which a captured soldier is being tortured in the hope he or she will reveal critical military secrets. Is the soldier acting altruistically by enduring intense pain rather than revealing the desired secrets? We can’t say it is impossible, but, here, the aegis of a first cause likely revolves around pride or honor; to use our interrogative test for first causes: “Remaining true to a superordinate code is what [respected and honorable soldiers] do.” They certainly don’t dishonor themselves by betraying others, even when it’s in one’s best interest to do so. Recalling Durkheim’s definition, obedience (as distinct from the obligatory notion of duty) also plays an active role here: Honorable soldiers are required to obey the established military code of conduct, so the choice to endure pain might be motivated by a desire to be (seen as) an obedient and compliant soldier who respects the code rather than (merely) an honorable person, though these two things are nearly inextricably enmeshed. To highlight a relevant religious example, Jesus’ sacrifice on the cross might not be considered a truly altruistic act if the then-operative value metric privileged a desire to be viewed by the Father as a good, obedient Son, who was willing to sacrifice Himself for humanity, above the sacrifice (and pain) associated with the crucifixion. (This is an example where the general criticism of Durkheim’s “utilitarian” altruism fails; Jesus did not benefit from His utilitarian sacrifice in the way mankind did.) These are complex motivations that require careful parsing, but there’s one thing we do know: If neither sacrifice nor pain can be related to any sort of intrinsic desire that satisfies the above interrogative test, then the act probably should be classified as altruistic, even though, as our model suggests, this is not likely to be the case.


4. Self-awareness

Given the arguments, it is still unclear why we should consider psychological egoism to be obviously untrue.  One might appeal to introspection or common sense, but neither is particularly powerful. First, the consensus among psychologists is that a great number of our mental states, even our motives, are not accessible to consciousness or cannot reliably be reported…through the use of introspection. While introspection, to some extent, may be a decent source of knowledge of our own minds, it is fairly suspect to reject an empirical claim about potentially unconscious motivations….Second, shifting the burden of proof based on common sense is rather limited. Sober and Wilson…go so far as to say that we have ‘no business taking common sense at face value’ in the context of an empirical hypothesis. Even if we disagree with their claim and allow a larger role for shifting burdens of proof via common sense, it still may have limited use, especially when the common sense view might be reasonably cast as supporting either position in the egoism-altruism debate.  Here, instead of appeals to common sense, it would be of greater use to employ more secure philosophical arguments and rigorous empirical evidence.

In other words, we cannot trust thought processes in evaluating our motivations to act. We might think we’re acting altruistically—without any expectations or desires—but we are often mistaken because, as our earlier examples have shown, we fail to appreciate the locus of first causes. (It is also probably true, for better or worse, that most people prefer to think of themselves more highly than they ought—a process that better approaches exchange ideas of the ideal self in choosing how and when to act.) Jeff Schloss, the T.B. Walker Chair of Natural and Behavioral Sciences at Westmont College, suggests precisely this when he states that “people can really intend to act without conscious expectation of return, but that [things like intrinsic desires] could still be motivating certain actions.” The interrogative test seems like one easy way to clarify our subjective intuitions surrounding what motivates our actions, but we need more tools. Our model seems to argue that the burden of proof for altruism rests with the actor—“proving,” without resorting to introspection, one’s expectation vector really is zero—rather than “proving” the opposite, that egoism is the standard construct. Our proposed p-values based on the mathematics of our model strongly suggest the unlikelihood of a genuine altruism for a random act (especially for large n), but despite the highly suggestive nature of the probability values, it is unlikely they rise to the level of “empirical evidence.”


Conclusion

Though I’ve done a little work in a fun attempt to convince you genuine altruism is a rather rare occurrence, generally speaking, it should be said that even if my basic conceit is accurate, this is not a bad thing! The “intrinsic desires” and (internal) social exchanges that often motivate our decision-making process (1) lead to an increase in the number of desirable behaviors and (2) afford us an opportunity to better align our actions (and ourselves) with a subjective vision of an “ideal self.” We should note, too, the “subjective ideal self” is frequently a reflection of an “objective ideal ([of] self)” constructed and maintained by coeval social constructs. This is a positive outcome, for if we only acted in accordance with genuine altruism, there would be a tragic contraction of good (acts) in the world. Choosing to act kindly toward others based on a private desire that references and reinforces self in a highly abstract way stands as a testament to the evolutionary psychosocial sophistication of humans, and it evinces the kind of higher-order thinking required to assimilate into, and function within, the complex interpersonal dynamic demanded by modern society. We should consider such sophistication to be a moral and ethical victory rather than the evidence of some degenerate social contract surreptitiously pursued by selfish persons.


References:

Murstein, B. (Ed.). (1971). Theories of Attraction and Love. New York, NY: Springer Publishing Company.

MATHEMATICS

Dr. Who, Fibonacci’s Rabbits, and the Wasp Apocalypse (Updated)

Chapter XII from Fibonacci’s Liber abaci describes the following scenario:

A certain man put a pair of rabbits in a place surrounded on all sides by a wall. How many pairs of rabbits can be produced from that pair in a year if it is supposed that every month each pair begets a new pair which from the second month on becomes productive?

Solving this riddle, of course, yields the famous Fibonacci sequence:

\text{F}_n = 1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,4181,6765,\dots

where the nth term in the sequence is the sum of the previous two terms (n-1 and n-2). That is, \text{F}_n = \text{F}_{n-1} + \text{F}_{n-2} where n > 2.
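
A trivial Python sketch of the recurrence (the helper name fib is mine):

def fib(n):
    # F_1 = F_2 = 1; F_n = F_(n-1) + F_(n-2) for n > 2
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([fib(n) for n in range(1, 11)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]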

What is much less well known is that a renegade group of rabbits escaped this enclosure and were later captured after trying to overrun a garden owned by a resident in a nearby town. These rabbits, they discovered, have a unique physiology: they procreate at a much faster rate. We’ll call this the “renegade sequence”:

\text{R}_n = 1,2,4,6,15,18,46,50,115,120,\dots


\text{Figure 1} (Desmos plot): g(x) = 0.7131e^{0.5477x}

In the original storyline for “The Ark in Space” (1975), Dr. Who battles the Wirrn, “a wasp creature [that lays] its eggs inside cryo-preserved humans”; it just so happens the Wirrn’s reproductive pattern follows \text{R}_n (but at a much faster rate—in thousands of births per hour), and after traveling back to the twelfth century to recover the rabbits in order to study their anomalous physiology, Dr. Who tries unsuccessfully to unravel the sequence, a necessary step toward predicting the exact date the Wirrns will take over the earth while also determining whether a proposed vaccine is guaranteed to work fast enough to avert human extinction. Unfortunately, everyone dies (including Dr. Who, whose regenerative powers are neutralized by a wasp-like venom), the invading creatures repopulate the earth, and the series ends. (Cooler heads prevailed, though, and the story was rewritten.)

Figure 1 shows \text{R}_n as a function of time (in hours, n) plotted with the regression curve (red) defined in the caption (R^2 = 0.9829). Before he received the fatal sting, however, Dr. Who realized the renegade sequence can be derived from an underlying sequence, call it \text{S}_n, defined by \text{R}_n = \sum_{i=1}^n \text{S}_{i}, but he was unable to define an equation that calculates \text{R}_n precisely—one that predicts the total growth of Wirrns for any hour n after the initial infection.

Challenge: Find an equation that calculates \text{R}_n for any n by defining a function f : \text{Z}^+_0\to \text{Z} that generates \text{S}_n.

(I developed this sequence from my work on stock-market trends. I’ll post the answer in an update TBD.)

Spoiler Alert

The answer, based on \text{S}_n, is

\text{R}_n = \sum_{i=1}^n \Big(\big\lceil\tfrac{x_i}{2}\big\rceil^{3-2\bar{x}_i} + 0^{\bar{x}_i}\Big)

where x_i = i - 1 and \bar{x} \equiv x\pmod{2}.
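
A quick Python check (a sketch, using x = i - 1 as above) that the closed form reproduces the renegade sequence:

from math import ceil

def S(i):
    # Closed form for the underlying sequence, with x = i - 1.
    x = i - 1
    return ceil(x / 2) ** (3 - 2 * (x % 2)) + 0 ** (x % 2)

R, total = [], 0
for i in range(1, 11):
    total += S(i)
    R.append(total)
print(R)   # [1, 2, 4, 6, 15, 18, 46, 50, 115, 120]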

MATHEMATICS, PHILOSOPHY, SCIENCE

Toward a quantification of intellectual disciplines

As a mathematician, I often find myself taking the STEM side of the STEM-versus-liberal-arts-and-humanities debate—this should come as no surprise to readers of this blog—and my principal conceit, that of a general claim to marginal productivity, quite often (and surprisingly, to me) underwhelms my opponents. So, I’ve been thinking about how we might (objectively) quantify the value of a discipline. May we argue, if we can, that quantum mechanics is “more important” than, say, the study of Victorian-period literature? Is the philosophy of mind as essential as the macroeconomics of international trade? Are composers of dodecaphonic concert music as indispensable to the socioeconomic fabric as historians of WWII? Is it really possible to make such comparisons, and should we be making them at all? The main question becomes this: Are all intellectual pursuits equally justified? If so, why should that be the case, and if not, how can society differentiate among so many disparate modes of inquiry?

To that end, then, I’ve quickly drafted eleven basic categories that I believe might serve us well in quantifying an intellectual pursuit:


(I) Societal demand

This will perforce involve a (slippery) statistical calculation: average annual salary (scaled to cost-of-living expenses), the size of university departments, job-placement rates among graduates with the same terminal degree, or anything that betrays a clear supply-and-demand approach to practitioners of the discipline.

(II) Influence and range

How fertile is the (inter-field) progeny of research? How often are articles cited by other disciplines? Do the articles, conferences, and symposia affect a diverse collection of academic research in different fields with perhaps sweeping consequences, or does the intellectual offspring of an academic discipline rarely push beyond the limited confines of its field of interest?

(III) Difficulty

What is the effort required for mastery and original contribution? In general, we place a greater value on things that take increased effort to attain. It’s easier, for example, to eat a pizza than to acquire rock-hard abs. (As an aside, and apart from coeval psychosexual aspects of attraction—obesity was considered a desirable trait during the twelfth to fifteenth centuries because it signified wealth and power—being fit holds greater societal value because it, among other things, represents the more difficult, ascetic path, which suggests something of an evolutionary advantage.) Average time to graduation, the number of prerequisite courses for degree candidacy, and the rigor of standardized tests might also play a useful role here.


(IV) Applicability and usefulness

How practical is the discipline’s intellectual import? How much utility does it possess? Does it (at least, eventually) lead to a general increase in the quality of life for the general population (e.g., the creation of plastics), or is it limited in its scope and interest only to those persons with a direct relationship to its claims (e.g., non-commutative transformational symmetry in the development of a Mozart piano sonata)? Another way of evaluating this category is to ask the simple question: Who cares?

(V) Prize recognition

Disciplines and academic fields that enjoy major prizes (e.g., Nobel, Pulitzer, Fields, Abel, etc.) must often succumb to more rigorous scrutiny and peer-reviewed analysis than those whose metrics rely more heavily upon the opinion of a small cadre of informed peers and the publish-or-perish repositories of journals willing to print marginal material. This isn’t a rigid metric, of course; many economists now reject the Nobel-winning efficient-market hypothesis, and the LTCM debacle of the late 90s revealed the hidden perniciousness crouching behind the Black-Scholes equation, which also earned its creators a Nobel prize. (Perhaps these examples suggest something problematic about economics.) In general, though, winning a major international prize is a highly valued accomplishment that validates one’s work as enduring and important.

(VI) Objectivity

Are the propositions of an academic discipline provable, or are they largely based on subjective interpretation, whether rational or intuitive? Is it possible the value of one’s intellectual conceit could change if coeval opinion modulates to an alternate position? It seems logical to presume an objective truth is generally more valuable than subjective belief.


(VII) Projected value

What is the potential influence of the field’s unsolved problems? Do experts believe resolving those issues will eventually lead to significant breakthroughs (or possibly chaos!), or will the discipline’s elusive solutions effectuate only incremental and unnoticed progress when viewed through the widest available lens?

(VIII) Necessity

What are the long-range repercussions of eliminating the discipline? Would anyone beyond its members notice its absence? How essential is its intellectual currency to our current socioeconomic infrastructure? To one a generation or two removed from our own?

(IX) Ubiquity

How many colleges and universities offer formal, on-campus degrees in the field? Is its study limited to a national or even localized interest (e.g., agriculture), or is it embraced by a truly international, humanistic approach? The greater number of opportunities to study a subject, regardless of where you live, suggests a higher general value.


(X) Labor mobility

Related to (IX), is it difficult to find employment in different geographic areas of the country, or is employment restricted to a few isolated locations or even specific economies? Does an intellectual discipline provide a global reach or only, say, North American opportunities? Are there gender gaps or racial-bias issues to consider? How flexible is the discipline? Do the skills you learn allow you to be productive within a range of occupations and applications, or do they translate poorly to the labor market because graduates are pigeonholed into a singular intellectual activity? Can you find meaningful employment with lower terminal degrees, or must you finish a PhD in order to be gainfully employed? There are certain exceptions here: brain surgeons, for example, enjoy a very limited employment landscape, and earning anything less than an M.D. degree means you can’t practice medicine, but these are examples of outliers that offer counterbalancing compensations within the global metric.

(XI) Probability of automation

What is the probability a discipline will be automated in the future? Can your field be easily replaced by a bot or a computer in the next 25 years? (Luddites beware.)

__________

Not perfect, but it’s a pretty good start, I think. The list strikes a decent balance across disciplines and, taken as a whole, doesn’t necessarily privilege any particular field. A communications major, for example, might score toward the top in labor mobility, automation, and ubiquity but very low in difficulty and prize recognition (and likely most other categories, too). I also eliminated certain obvious categories—like historical import—because the history of our intellectual landscape has often been marked by hysteria, inaccuracy, and misinformation. To privilege music(ology) because of its membership in the quadrivium when most people believed part of its importance revolved around its ability to affect the four humors seems unhelpful. (It also seems unfair to penalize, say, risk analysts because the stock market didn’t exist in the sixth century.)

Specific quantifying methods might involve a function f : \text{R}^n\to \text{R} with a series of weightings (possibly via integration) where n is the total number of individual categories, c_i, but the total value of a discipline, v_j, might just as easily be calculated by a geometric mean, provided no category can have a value of zero: v_j = \left(\prod_{i=1}^n c_i\right)^{1/n}.
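
A minimal sketch of the geometric-mean option in Python (the category scores are invented for illustration; any positive scale works, provided no score is zero):

from math import prod

def discipline_value(scores):
    # Geometric mean of the category scores c_1..c_n (no score may be zero).
    return prod(scores) ** (1 / len(scores))

scores = [7, 9, 8, 6, 9, 10, 8, 7, 9, 6, 3]   # hypothetical values for (I)-(XI)
print(round(discipline_value(scores), 2))     # a single value v_j for the field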

Comments and suggestions welcome.

MATHEMATICS, SCIENCE

On making “Chewbacca Mom” Disappear

I take great solace in the fact that we could make “Chewbacca Mom” (hereafter CM) vanish—without being hurt in any way—if we could create the required Lorentz contraction, which follows from the Lorentz factor of Einstein’s special relativity:

L = L_r(1 - (\frac{v}{c})^2)^{1/2}

where L_r is CM’s length at rest. As her velocity approaches the speed of light (i.e., as v/c \to 1), CM (essentially) disappears before our very eyes! (And, yes, if the vehicle were large enough, we could fit the Kardashians inside, too. Don’t you just love science?)
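
A quick numerical sketch in Python (with L_r = 1, so the output is the fraction of CM’s rest length that remains):

from math import sqrt

def contracted_length(v_over_c, L_rest=1.0):
    # Lorentz contraction: L = L_r * sqrt(1 - (v/c)^2)
    return L_rest * sqrt(1 - v_over_c**2)

for beta in (0.5, 0.9, 0.99, 0.999999):
    print(beta, contracted_length(beta))
# 0.866..., 0.436..., 0.141..., 0.0014...: as v/c -> 1, CM vanishes.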

Until technology catches up to mathematics and physics, though, I guess I’ll just have to keep filtering my news feed.

MATHEMATICS, PHILOSOPHY, RELIGION, SCIENCE

Is Atheism Irrational?


The following is a very interesting (and rather long) FB discussion about a NYTimes link I posted to my wall, which led to an enlightening debate concerning the viability of the Big Bang theory based upon stochastic measurements. I have done my best to present the discussion in its original form.

Leon:  There’s probably an established name for this fallacy. Just because AP can’t imagine what sort of life might arise under different conditions doesn’t mean that it wouldn’t — and that it wouldn’t buy into the same fallacy: “If conditions had been just a little different, our world could have been overrun with deadly WATER and life as we know it would have been impossible! This clearly proves that everything in the universe was shaped with our welfare in mind!”

Peter:  Leon, I generally agree, but changes to cosmological parameters don’t just lead to “deadly water.” It’s hard to imagine a universe that could sustain any life while precluding lifeforms of equivalent complexity / interest / value to humans. So when we talk about universes different from ours, we might as well be talking about universes with no life at all. This pushes the question back to whether life itself is a worthy criterion by which to judge a universe—and then back to whether the “worth” of a universe is even a coherent concept, absent human judgment. This article gives a sharp analysis.

David:  More important, I think, is the mathematics involved in the (very unlikely) probabilities associated with the current state of the universe—regardless of whether we wish to quantify that approach by burdening it with the concept of anthropocentrism. (And even if we do wish to pursue such an approach, anthropocentrism doesn’t seem to cast a greater shadow over creationism than it does the theory of evolution, which is, essentially, an anthropocentric theory concerned—if not obsessed—with humans qua the teleology of a “trajectory toward perfection.”) The notion of “life” is irrelevant, for example, if we’re limiting our discussion to the stochastic probability of the synthesis of a single protein chain subsequent to the Big Bang (1 in 10^125).

Peter:  David, I suspect both of our minds are already made up, but:

1. Evolution, as I understand it, is absolutely not human-centric or teleological. Quite the opposite: humans aren’t the end or destination of the process, just another branch on the tree.

2. Anthropocentrism (biocentrism, etc.) is still an issue in this discussion of probability.

The set of states containing a protein chain is no more or less improbable than an equivalently “sized” set of states without it. It’s hard to reason dispassionately about it, for the same reason it’s hard to imagine a world in which you had never met your wife. But *things would have happened* in all those other worlds too. When you say, “What are the chances that we would meet and fall in love?” you’re implicitly valuing this scenario above all the others. It’s the same with the probability argument you give above. The article I linked gives a neat rebuttal to *my* point on pp. 173–175. It really is worth a read!

Leon:  Peter, the article you linked to is very well-done (except for one thing that I will mention) and I learned from it. However, when I found my attention drifting halfway through and wondered why, I realized that no one, *really*, is making the pure logical argument that there might be *some* being that created the universe. Mr. Manson does a good job of pointing out that some debunking strategies are not really arguments; they’re rhetorical strategies. What I fault him for is not pointing out that claims such as Plantinga’s above are also, just as much, rhetorical strategies rather than logical arguments. Manson does a good job of showing that what he calls the “moral argument” for some sort of creator requires there to be a moral value to creating conscious beings before anything in the universe existed. He then goes on to say he doesn’t know what arguing for such a value ex nihilo would look like. That’s right, and I don’t think anyone has done it, because anyone who gets to this point is really just providing rhetorical cover for saying that there must be a god. That, or if Manson takes the extra step in honesty and admits this, then he has to say that the moral argument is circular. And, in the spirit of following up on Manson’s analysis of the debunking rhetoric, I’ll point out that a lot of the success in Plantinga’s “argument from design” story is undoubtedly its ecumenical nature: it doesn’t mention any sects, so each listener gets to pencil in the name and characteristics of their own preferred god.

Dave, highly improbable events occur all the time; we don’t feel compelled to find divine explanations for them unless it reinforces our own personal narrative for the universe. The last time I saw a football game on TV, a team that was down by 5 threw a hail-mary pass as time expired. The quarterback threw it squarely to the two defenders. One of them jumped high up, caught the ball, and as he came down his hands hit the top of the helmet of his fellow defender. The ball bounced up and forward, over the head of the intended receiver, who did a visible double-take but managed to grab it and carry it into the end zone. I don’t know what the odds are on this, but no one feels obliged to find a divine explanation for this unless (a) they’re a really big fan of the winning team or (b) I end up getting inspired to become a football player by seeing this play and want to credit God with motivating me. That’s my response to the math: no one would care about the odds (except maybe “Wow! Cool.”) if they weren’t a way of reinforcing the emotional payoff of one’s chosen narrative about the overall situation.

Peter:  So, what about the alleged incompatibility of materialism and evolutionary theory? That seems like the novel part of AP’s argument, and I don’t really know what to make of it. My gut reaction is that there’s a problem with the reasoning around “belief” (in particular, why should we assume a priori that each belief is 50% likely to be true?), but I don’t know enough philosophy to really get it.

David:  @Peter: 1. We certainly know Darwin to have framed the concept of selection as a progression toward a state of “perfection,” and Lamarck even described the evolutionary trajectory as a ladder of ever-increasing moments of such perfection. So, even if a teleology isn’t explicitly stated, it’s heavily implied as an essential component of evolution’s general guiding principle. Also, I know of no examples within evolutionary biology where selection and adaptation have effectuated a regression to a less perfect state, so whether or not there exists intention (i.e., a Platonic teleology, etc.) with respect to the evolutionary process, there exists at least a teleomatic process that, through its natural laws, moves toward something that is “better than it was.” Of course, it might be more than that—say, an Aristotelian teleonomic process (i.e., a process of “final causes”)—but what we have is, at least, the law-directed purpose embedded within the process itself. Humans might not finally be a reification of the highest rung of Lamarckian “perfection,” but if we aren’t, that doesn’t necessarily efface the likelihood we exhibit a current state of perfection—“better than we’ve ever been”—which is still a shadow cast by (coeval) evolutionary predilections toward anthropocentrism. 2. I’m not quite sure I understand your point with respect to the mathematics-to-anthropocentrism link. Are you referring to James’s “infinite universes” when you speak of an “equivalent ‘sized’ set of states”? Also, I’m not sure why “valuing” 1:10^125 above some other p-event is necessarily a problem. We privilege it because of its importance to what ostensibly comes next.

@Leon: Sure, no one need hold for a divine explanation of events witnessed, say, during a football game, even in the face of highly improbable events—unless, of course, you’re a hard determinist—but I think you’re (inadvertently) misappropriating causation/intention; there’s no reason to entertain the possibility of design and authorship with respect to the very low odds involved in the path of your football, so an attempt to do so immediately strikes one as extremely odd, which suggests (erroneously) that the argument for the design of (generally) highly unlikely events is logically unsound. It’s easy to imagine the occurrence of unusual events when contemplating the (sum of the) discrete actions of autonomous agents within the confines of physical scientific laws, but in no sense do those events demand the possibility of, or need for, a “designer.”

But consider the following Gedankenexperiment: You are sitting at a table with three boxes in front of you. One box contains identical slips of paper, each with one of the twelve pitch classes (pcs) inscribed on it; the second box also contains identical pieces of paper, and on each is written a single registral designation; the third box, like the others, contains identical pieces of paper, but, here, each piece of paper denotes a single rhythmic value (or rest). If you (or a monkey or a robotic arm) were to choose one piece of paper from each box randomly (with replacement) and notate the results, what are the odds you would create a double canon at the twelfth or even a fugue? I’m not going to do the math, but the p-value is an unimaginably small number. Yet if we were to suddenly discover a hitherto unknown sketch of just such a canon, who would presume it to be the result of a stochastic process? None of us. Why? Because the detailed complexity of the result—the canon itself—very strongly suggests a purposeful design (and, thus, a designer), so we would perforce reject any sort of stochastic probability as a feature of its existence. Is it not odd, then, that the canon’s complexity evinces the unimpeachable notion that a composer purposefully exhibited intention (and skill) in its creation, yet the universe—with its infinitely more complex structure and an unbelievably smaller probability of stochastic success—can be rationalized and dispatched by random (and highly improbable) interactions between and among substances that appeared ex nihilo?
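(For concreteness, here is a minimal sketch of Dave’s three-box draw in Python; the particular register and rhythm sets are illustrative stand-ins, simplified to undotted values and no rests:)

import random

PITCH_CLASSES = list(range(12))        # box 1: the twelve pcs
REGISTERS = list(range(1, 9))          # box 2: an assumed eight-octave range
RHYTHMS = [2**-i for i in range(5)]    # box 3: whole note down to sixteenth

def draw_notes(n, seed=0):
    # One slip from each box per note, drawn with replacement.
    rng = random.Random(seed)
    return [(rng.choice(PITCH_CLASSES), rng.choice(REGISTERS), rng.choice(RHYTHMS))
            for _ in range(n)]

print(draw_notes(4))   # four random (pc, register, duration) triples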

Leon: Yes, Dave, I’m very much enjoying the discussion. Peter, I honestly think that Plantinga is just throwing in everything that occurs to him, in the hopes that it will stick. If that seems ad hominem, well, I just see him appealing to “But doesn’t it just seem ridiculous that…[new claim here]” over and over again, without any reasoning other than an appeal to “isn’t it just so improbable…” I think it’s perfectly okay to not address some of that, on the grounds that we’re not here to figure out a coherent argument for his rhetoric for him. Dave, it’s completely wrong to attribute teleology to Darwin and the theory of evolution that comes from him. He is something of a transitional figure, and may not have guarded his language against teleological implications as well as later workers did. But even during his lifetime, he was fiercely opposed by biologists who had explicitly teleological accounts of evolution, like Carl Nägeli; and by the end of the century this had become well-established enough that even people like Mark Twain (certainly not on the cutting edge of biology) could ridicule teleology via an argument by design: he said that if we take the total age of the earth as the height of the Eiffel Tower, then the period of man’s time on earth can be represented as the layer of paint at the top of it — and that saying that all of earth’s history was in service of bringing man into existence is like saying that the purpose of the Eiffel Tower is to hold up that top layer of paint.

David:  Oh, I’m not suggesting “evolutionary teleology” ends with humans—though modern scientists often speak of humans with such reverence that they imply such a concept (e.g., Dawkins’s discussion of human brain redundancy, etc.)—but I am saying there exists a teleology of process (toward improvement/perfection) that is built into evolution’s core principles. You can’t have one without the other. Whether that “constant state of improvement” ends with human life is not my concern—though it’s difficult to imagine a change-of-kinds progression beyond human life (could the Singularity be that moment?)—but it seems to occupy the bulk of Plantinga’s conceit.

Leon:  […]and also, Dave, your gedanken experiment is well-taken, but in this and the original question I think you underestimate the vastness and tremendous age of the universe — under our current hegemonic cosmology, there have been planets in existence for ~10 billion years, and there are hundreds of billions of galaxies each containing hundreds of billions of stars. If your experiment is carried out at each star for a comparable length of time, I’m quite certain we’ll end up with thousands of perfectly appropriate canons. I also disagree with this example in that I believe that you’re working under an assumption that I’ll illustrate with the following story, taken from a philosopher whose name I’m not recalling: the edge of a road erodes, revealing some pebbles that spell out a sonnet of Shakespeare’s. We get very impressed by this, assuming it either to be somehow miraculous or a prank — in either case we take it to demonstrate intentionality of *some* sort. The philosopher’s point is that this reaction is an anthropocentric bias — *any* random arrangement of revealed pebbles is just as unlikely as any other, yet we don’t take the more random-looking ones as evidence of intentionality. It’s not quite that simple, of course; but as you pointed out, we don’t have a lot of space here. But I will say that given a sufficiently large number of roadsides, I’d expect a *lot* of things that “make sense” to appear, especially given that we conflate many things that “make sense” in the same way but have surface features that differ (the “same text” with different fonts or handwritings, for example) but we don’t do that with more “random-looking” arrangements. It also seems to me that you made the gedanken experiment because you think of life (or intelligence) on earth as something like the appearance of a Shakespeare sonnet on the wayside — evidence of intentionality. But to do so already assumes intentionality in the pre-life universe — that is, it’s circular reasoning. Teleology is a directionality imposed from without, not one that results from humans seeing a situation and imposing their thought habits. Some species get better at some of their life tasks because more of their handier members survive; in the absence of humans calling that a direction and privileging it as the “essential” nature of evolution, that’s no more teleological than water flowing downhill. Actual evolutionary theory points out that many, many organisms have very obviously imperfect adaptations, yet as long as they can still survive they are not replaced by “fitter” species, nor do they keep evolving spontaneously just for the sake of evolving. And there are tons and tons of very “primitive” organisms on earth — like nematodes and bacteria, which probably make up 90% of the earth’s biomass — that are so evolutionarily fit that they probably have not evolved since before the dinosaurs. There’s no teleology driving them. Also, this. 🙂

Peter:  Well, and even if Darwin did think selection was teleological (which, I dunno, maybe he did early on at least), theorizing about evolution didn’t stop there. Twain’s quip is clever, but putting humans at the top of the tower still seems like a 19th-century move. We’re probably an *extreme* of something, but I don’t think many evolutionary theorists would say we’re in a state of perfection, in either of the senses Dave outlines. It sounds like you’re thinking of evolutionary fitness as a universal quality that every organism has some amount of. But that’s not how it works: fitness is relative to a habitat. We humans are more “fit” than our predecessors in the sense that if you were to drop one of our hominid ancestors into most present-day human habitats, it wouldn’t do so well. (It would probably be terrible at music theory, for instance.) But that’s not because we’re universally more “fit” or better adapted for life in general. Plenty of organisms survive in habitats that would kill us instantly. Fitness is optimized over shorter spans than environmental change, so we can pretty much assume that everything that survives and reproduces is at a local maximum of its fitness landscape. But that doesn’t mean it’s more fit than its ancestors were, or less fit than its descendants will be. [edit: …in the long term, I mean.] The double canon example is great, but I think it illustrates my point better than yours. If we looked up in the sky and saw the stars and comets arranged into a double canon, or if one were somehow encoded into our DNA, then yes, we’d be compelled to look for some intelligent composer. That would be really cool! And it would be statistically unlikely, because we can imagine scenarios in which things could have gone differently, and we wouldn’t have observed those things. (Our actual world being one of them.) But that’s not the same kind of evidence provided by our existence in the universe, because there’s no scenario in which we would have been able to observe our nonexistence. The improbability of our existence just doesn’t bear on the question. [Edit: In oh-so-fashionable Bayesian terms, P(universe|people) is 1, no matter what P(universe) may be.]

David:  @Leon: My thought experiment only meant to suggest that sufficient complexity, beyond the bounds of any sort of reasonable levels of stochastic probability, strongly suggests design. It’s not circular reasoning because we invoke that logic each time a new sketch is discovered. Your counterpoint sounds a lot like the infinite monkey theorem. But, as I’ve described in my blog, the math doesn’t even work; infinite exponentiation on the interval (0,1) always approaches zero. So, we always have a fatal contradiction: the p-value of an event cannot be both certain and impossible. Imagine a boy trying to throw a ping-pong ball into a paper cup 60 yards away. There’s a big difference between 100 billion earths, each with a single boy trying to throw the ball into a cup 60 yards away, and 100 billion boys covering the first earth, each with a ball they will attempt to throw into the cup.
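(Spelled out, the limit being invoked here is

\lim_{n\rightarrow\infty} a^{n} = 0 \text{ for } a \in (0,1),

i.e., the probability that *all* of n independent trials, each succeeding with probability a, succeed vanishes as n grows without bound.)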

Leon:  Peter, I totally agree with you about evolution not being a single monolithic structure with humans at the end of it. But I would quibble: I do agree that “fitness” is not some abstract quality that everything now has in greater measure than the past and lesser measure than the future. But as far as its proceeding more slowly than environmental change goes, we’ve certainly upset that. And mass extinctions are a counterexample. And even in a stable environment, one of S. J. Gould’s flagship examples of bad adaptation was the panda’s “thumb”, which is certainly not an optimal adaptation. It just works well enough to keep the pandas going, and that’s enough.

Peter:  You’re absolutely right about mass extinctions and catastrophic events—I should have been clear. But is it still fair to say that the panda’s thumb is a case of a local maximum in the fitness landscape? Like, small “steps” around it were worse? What I meant to say was just that evolution isn’t solely driven by competition in a stable environment—which is what the teleological, constant-improvement model assumes. Also, yes—this is a super fun conversation! If only I could get this excited about the work I’m *supposed* to be doing.

David:  Quick insertion: Humans have vestigial organs, but that doesn’t mean we must jettison the commonly held belief that humans represent a “local maximum,” although, if we follow the metaphor precisely, that phrase presumes a decline after the peak, which doesn’t really describe any of the evolutionary biology I’ve read. “Maladaptations” and extinctions, I think, should also be contextualized within the larger trajectory of “progress”—the whole survival-of-the-fittest thing (not the survival-of-everything thing). I’ll have to come back to the canon example!

Peter:  Yay, my favorite conversation is back! Are you sure survival-of-the-fittest should be characterized as “progress”? I’m certainly not an expert on evolutionary theory, but I get the strong impression that that could be true only in the relatively short term, in a stable environment (as I tried to say above). Wikipedia cites S. J. Gould on this: “Although it is difficult to measure complexity, it seems uncontroversial that mammals are more complex than bacteria. Gould (1997) agrees, but claims that this apparent largest-scale trend is a statistical artifact. Bacteria represent a minimum level of complexity for life on Earth today. Gould (1997) argues that there is no selective pressure for higher levels of complexity, but there is selective pressure against complexity below the level of bacteria. This minimum required level of complexity, combined with random mutation, implies that the average level of complexity of life must increase over time.”

David:  Quickly, though, I’d also say that the very high “improbability of our existence”—based on the sheer math involved—has quite a bit of bearing on the probability of design, imo. In fact, that was the whole point of my double-canon example. Why don’t we ever consider a double canon—when we find one—to have been created by stochastic processes? I think the notion of “progress” is inherent in the concept of “survival.” What doesn’t survive obviously cannot progress.

Peter:  Actually, that just shows that “survival” is inherent in “progress.” What survives still does not necessarily progress. And what would a biological definition of progress look like, anyway? The problem with the probability argument is that the universe is a precondition for our observation. When we observe that the universe happens to be just right for us to exist, or that we seem to exist despite incredible odds, what does this tell us? This is exactly the question that Bayes’s theorem is built to answer: “How likely is Y, given X?” How likely is it that the universe would have these properties, given that we exist? If it’s the only sort of universe that could support intelligent life, then, well, 100%. Ha! It turns out my argument has a name, and can be expressed MUCH more clearly. It’s the Weak Anthropic Principle: “…the universe’s ostensible fine tuning is the result of selection bias: i.e., only in a universe capable of eventually supporting life will there be living beings capable of observing any such fine tuning, while a universe less compatible with life will go unbeheld.”

Leon: Dave, it actually strikes me that there are two ways to take your thought experiment. One is, as you say, that the result you discuss is very improbable, therefore perhaps someone did it on purpose. This strikes me as more of the “young earth,” hands-on creation position, not the one that you or AP are floating here. The other approach is less about the system’s ability to generate the canon than the idea that if there’s some process in place that *can* generate such a canon, then the process must have been set up by an intelligence that had such canons in mind. This seems closer to Plantinga’s (and your?) approach in saying that if this universe *can* produce life, it must have been set up that way on purpose. Is that correct? (Though of course, with God as a non-timebound, less anthropomorphic being, perhaps there’s not so much of a difference between these two ways of looking at things).

David:  Okay, I think “evolutionary teleology” derives from its principles. That is, “progress” is an inherent feature of evolutionary design and not some exogenous thing slapped onto its structure ex post facto. That doesn’t mean everything needs to change constantly—there are periods of stasis (i.e., localized temporal optimizations)—but it does suggest that when things move, they move in one direction. When things stop moving (forward), when organisms stop evolving and adapting (in the long run) in ways that are beneficial to their survival, they eventually become extinct; thus, even the notion of extinction becomes a feature of a more diachronic concept of progress. Water flows down the hill (and even pools into puddles of stasis) because of the “teleology” established by the law of gravity. If we reject the teleological notion of progress—if we insist that adaptation and fitness are random, non-directed processes—evolutionary biology becomes a much tougher sell, imo. I’m not really interested in fit-for-life arguments of the universe, even though that concept drives Plantinga’s conceit. I do not reject the possibility of stochastic double canons because composers exist; I assume composers exist because the p-value of a stochastic double canon is impossibly small. This allows me to sidestep the problems associated with Bayes’s theorem. I’ll have to come back for the rest…including Leon’s interesting parsing.

Peter:  Okay, I see where you’re coming from re: evolution, and I agree that natural selection does generally lead to greater “fitness.” In fact, I’m pretty sure that’s how we define fitness: that which is maximized by natural selection. But it has nothing to do with a “trajectory toward perfection,” as you said way back at the start of the thread. Fitness isn’t concerned with perfection (in the sense of “freedom from defect”), only with survival and reproduction *in a particular ecosystem*. Actually, Wikipedia tells me that the phrase “survival of the fittest” is a misquotation: Herbert Spencer’s original formulation was “survival of the best fitted.”

David:  True. Perhaps I should have described it as a “trajectory toward a greater perfection.” Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops? Leon, I’m not quite sure I understand; there IS such a process because I created it, lol: one picks from three boxes, each filled with unique and discrete musical elements. The probability of that process creating the desired result, however, is truly minuscule. In fact, let’s put a face on it. If we assume an eight-octave range, common time, and both note and rest values no shorter in duration than a sixteenth note (and no longer than a whole note), the p-value for generating (only!) a C major scale in quarter notes (within one octave) is given by

p = [(1/12)(1/8)(1/32)]^7 \approx 3.87 \times 10^{-25}

A 40-note composition with uniquely determined values has a p-value approaching 3.19 x 10^-140. (There are only 10^80 atoms in the observable universe.) Imagine the p-value in generating Bach’s Contrapunctus I from BWV 1080! So, okay, what do these numbers mean? Well, it’s simple: you’d have a MUCH better chance of closing your eyes and, with one trial, picking the single atom (within the observable universe) I’ve labeled “Leon” than creating a 20-note dux (with comes) by using the three-box method I’ve described. But, Leon, if you’re suggesting a “process” that with a HIGH PROBABILITY does, in fact, create, say, Baroque-era canons with invertible counterpoint, then I’d say the process IS the intelligence itself, which is my point. I can create the canon myself, but I can’t conjure a larger p-value out of the stochastic product. Of course, I could create such a high-valued stochastic process by severely limiting the variables (e.g., controlling p-values for each input, etc.), but rigging the task to be less demanding cannot be evidence for the feasibility of the more difficult one. (And my model could be made even more difficult!)
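(Dave’s figures check out; a quick sketch, taking his stated per-box odds of 1/12, 1/8, and 1/32 as given:)

from fractions import Fraction

# One slip from each box: 12 pitch classes, 8 registers, 32 rhythmic values.
p_note = Fraction(1, 12) * Fraction(1, 8) * Fraction(1, 32)

print(float(p_note ** 7))    # seven uniquely determined notes: ~3.87e-25
print(float(p_note ** 40))   # a 40-note composition: ~3.19e-140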

Peter:  > “Is it possible there exists a permanent-yet-imperfect evolutionary state that just, well, stops?”< Sure, if the environment holds still and other competing species also stop evolving. More seriously, what about cockroaches, or bacteria? They’ve been around in roughly their current forms a heck of a lot longer than we have. I guess my big point is that in the big picture, evolution isn’t a trajectory *toward* any particular destination—more like an expansion around an origin. See the link above about “largest-scale trends.” > “The probability of that process creating the *desired result*, however, is truly minuscule”<

This analogy depends on our being the “desired result,” which is (I think) what Leon was poking at a few comments ago. It begs the question, IMO.

Leon:  It’s really fascinating to me to discover completely unexpected ways that we misunderstand each other. 🙂

David:  Okay, what am I missing, lol?  I’m not referring to humans as the desired result, Peter. I’m more than content to limit the discussion to protein-chain synthesis—with or without humans. As for the stalled-evolution hypothesis, I feel much more comfortable with the notion that each “thing” is (largely) a discrete entity. Why some evolving bacteria, cockroaches, and fish but not others? Is it somehow “fitter” to be a bacterium rather than a human in 2014? There are sizable hurdles there, imo. As an aside, can we indulge in the notion of a Platonic canon at the tenth, lol? 🙂

Peter:  > Why some evolving bacteria, cockroaches, and fish but not others?<

I’m not sure I understand. If the question is why mutations aren’t possessed by every individual in a species, it’s just the way DNA works: mutations are random. If the question is why populations diversify and speciate, it depends on the degree to which they maintain contact as they split.

> Is it somehow “fitter” to be a bacterium rather than a human in 2014?<

Only if bacteria displace humans. If we’re not in competition, then no. Relative fitness is defined only among competing genotypes (see the Wikipedia link to “Fitness,” above). Okay, I’m gonna try one last time to sum up my objections to the probability argument.

1a. Whenever we describe the probability of an event, we do so in terms of a sample space. For example, when someone rolls two dice together, the chance of getting double sixes is 1/36, because the sample space includes 35 other combinations, all equally likely to occur.

1b. Current physics describes many cosmological configurations, all equally physically valid, the vast majority of which could not sustain intelligent life. In this sample space our universe is improbable, bordering on impossible.

2a. When we observe a spectacularly unlikely event that borders on the impossible, that can give us doubts about the way we’ve constructed the sample space. For example, if we dump out a bucket of dice, and they all come up 6, it’s a pretty fair bet that they were loaded, and not all configurations were in fact equally likely. (Or in your canon analogy, its low entropy suggests to us that it was composed by traditional canonic process, rather than by some stochastic one that would inflate the sample space.)

2b. Analogously, the improbability of our universe suggests a problem with the sample space. For you, the conclusion is that our universe wasn’t created by a random roll of the cosmic dice, but rather was designed with an eye toward this outcome. Another explanation would be that the cosmic dice have been rolled again and again, and this is the only outcome that we (as intelligent beings) could ever observe. From what I can tell, most physicists find this plausible (the debate now seems to be about “where” these other universes are). This improbability (by itself) is NOT evidence of multiple universes, nor of a designer. It just doesn’t weigh on the question either way. It’s analogous to a bucket of dice rolled by someone else in a room that we’re invited into *only* if/when all the dice come up six. Given that scenario, it doesn’t matter how many dice are in the bucket, or whether or not they’re loaded: the only result we will ever see is the one with all sixes.
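(A toy version of that dice room, with a four-die bucket instead of two hundred so that successes actually occur; the trial count is illustrative:)

import random

rng = random.Random(42)
seen = []
for _ in range(100_000):
    roll = [rng.randint(1, 6) for _ in range(4)]
    if all(die == 6 for die in roll):   # we are invited in only on all sixes
        seen.append(roll)

# However improbable each all-six roll is, it is the only kind we ever see:
print(len(seen), "observed rolls, every one of them [6, 6, 6, 6]")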

David:  1a. Yes, all p-values within a distribution will sum to one, but if we’re interested in rolling double sixes, 1/36 will be our focus for a single trial, though it might very well take 70 trials to get the desired result.

1b. Yes, and I’d phrase it this way: a single “trial” effectuated by the Big Bang yields a p-value so small that the likelihood of some stochastic design of the current cosmological configuration (or even a configuration without human life) very quickly approaches, and is, for all intents and purposes, zero.

2a. Precisely. A bucket-of-sixes event strongly suggests an intervention of some kind; we do not presume we’ve witnessed some sort of unbelievably rare stochastic moment (i.e., [1/6]^n : n = the total die count). A bucket of 200 dice yields a p-value of about 2.35 x 10^-156. (Again, there are only 10^80 atoms in the observable universe.) The same inference, of course, applies when we unearth a double canon at the tenth; though a canon’s p-value is much, much smaller than that of a bucket of sixes.

2b. As a theorist and mathematician, I’m saying, as we did in 2a, that there exists an intervention with respect to such minuscule p-values, that stochastic processes are a very poor explanation for our cosmological result. As a Christian, I believe that intervention involves an omniscient God, just like person X composed the impossibly unlikely canon (with, as you suggest, an incredibly low entropy) rather than a robotic arm pulling pieces of paper from three boxes. Also, mathematics has only proven eleven dimensions, yet that does not simultaneously prove at least eleven “parallel universes.” Four of those dimensions, as you know, are firmly rooted within our present (single) universe. So, there’s no proof that, say, an infinite number of Big Bangs took an infinite number of stochastic cracks at generating our current cosmology. And even if that WERE the case, the math is still restrictive. Each Big Bang attempt would have a near-zero p-value for the current cosmology, and Bernoulli’s law of large numbers essentially guarantees such a near-zero p-value at an infinite number of trials. A single universe-trial does not involve a non-replacement p-value (e.g., pulling a marble out of a bag and putting it in your pocket); you don’t approach p = 1 at an infinite number of trials, though that seems to be a common mistake people make. It’s like the analogy I described earlier—that of a near-infinite number of earths, each with a single child trying to throw a ping-pong ball into a Dixie cup 60 yards away. The p-value for each discrete earth does not change—assuming uniform laws of physics and consistent variables (e.g., wind speed, topography, etc.)—and the earths are not working in tandem to reduce the improbability of the event…as would a single child who could throw 100 billion balls at once.

Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us—like dumping a near-infinite number of cosmic buckets filled with fair dice and arguing that every die from every bucket lands on six. That’s impossibly unconvincing. We’re also assuming that these bucket rolls never have deleterious effects when twos, threes, and fives emerge. A true p-value for cosmology would have to include the likelihood of internecine stochastic combinations that would immediately end the process. So, there’s serious doubt as to whether the universe would even be “allowed” to get an infinite number of “bucket dumps” before we’re asked to enter the room. I guess I’m just perplexed, too, by the notion that we’re unwilling to give stochastic processes the benefit of the doubt when it comes to canons and bucket-dumps, but we’re more than willing to make them the bedrock of the most statistically improbable event(s) involved in creating the universe. In that limited sense, then, as the article queries, I do believe atheism to be irrational.

Peter:  The analogy is flawed because our observation of canons and buckets is unrestricted: we can, in principle, observe any result in the sample space. Same with the Dixie cups: if we want to make the analogy work, then we’re not standing next to some arbitrary boy, watching him throw ping-pong balls. We’re a tiny creature that’s generated inside a Dixie cup the moment a ping-pong ball lands inside it. All that’s necessary to explain our existence is that there be enough boys, balls, and cups that it could plausibly happen at least once, in *some* trial. The p-value can be as low as we want for any single trial—the selection bias ensures that we can only ever observe the successful one. This isn’t just my crazy idea; it’s a fundamental principle of statistics. At this point I’ve explained it as clearly as I can, so if you still have a problem with it, it might be time to appeal to a higher court.

> Anyway, the theory as it is currently taught, however, is that a single Big Bang event (read: trial) created a stochastic chain that created the cosmology that surrounds us […] That’s impossibly unconvincing.<

I agree! As I mentioned above, most physicists seem to agree too, and since they noticed this problem, they’ve proposed various multiverse scenarios that provide an adequate number of “trials.” (This is different from the better-known “parallel universes” of some interpretations of quantum mechanics, which share the same physical properties.) Obviously, I understand virtually none of the real physics here, but it’s so much fun to grapple with the general conceptual outlines—as cool as any science fiction. I hope we now agree that a suitably large number of “trials” would solve this problem. You also seem skeptical that the universe would get that many tries, but I don’t see why not. The eleven dimensions of spacetime aren’t a problem, since more universes doesn’t mean more dimensions: you can “stack” infinite n-dimensional spaces in an (n+1)-dimensional space. (And anyway, that’s irrelevant in current models—see below.) You also mention “internecine stochastic combinations that would immediately end the process.” Could you elaborate on that? It seems like it could make sense in a cyclic model, with one universe at a time—but there are *plenty* of alternatives. So, if you’re curious what physicists say about this, here are a few theories I’ve come across—I will inevitably butcher them, but as always, there are better explanations on Wikipedia and elsewhere: (a) eternal inflation: the universe actually expands much faster than the speed of light, and different regions of spacetime are “far enough apart” from each other (in some sense) as to be “causally unconnected.” So they have different sets of parameters. This seems to be very popular these days. (b) black hole cosmology: each black hole is the boundary of a new “universe” that may have its own parameters. Not only does that imply that all the black holes in our universe are themselves baby universes, but it also implies that we ourselves are stuck in some other universe’s black hole! How metal is that? (c) mathematical universe hypothesis: this one is so crazy that even its creator Max Tegmark claims not to believe it. The idea is that the fundamental level of reality isn’t particles, fields, or branes, but rather math itself. Every mathematical system is its own universe—not just a description of one. Honestly, this sounds kind of dumb and self-defeating to me, but Tegmark is a smart guy who has forgotten more math than I could learn in a lifetime. So hey, if he says it’s possible, that’s cool. As for your final question, which is probably at the heart of this, I prefer physical explanations because they’ve worked well in the past. Maybe they will break down at some point, and the only answer available will be “God did it”—but it hasn’t happened for any such question in the past, and it’s not clear to me that this is the exception. As for why it’s stochastic instead of directed in some way, that’s just the null hypothesis. There may well turn out to be reasons why some configurations are preferred, but AFAIK we have no reason to assume that at this point. Sorry, I should have been clear: I also agree that atheism is irrational. To my thinking, the rational position at this point (which is not to say the best!) is agnosticism.

David:  A quick comment for now, but I’ll write more later: You keep insisting on observation as a necessary condition for my argument, but I’ve never made that assumption. Plantinga did—and you have with the bucket-of-dice metaphor—but I’m really only interested in “Platonic” events. We might never witness the boy on the nth earth trying to get his ball in the cup or the robotic arm reaching into the boxes (or the resulting composition!), but that has no bearing on the p-value of the trial. We don’t need to be there at all—in the cup or next to the boy or even on the same planet! My qualms with extraordinarily low-entropy p-values are distinct from whether or not we ever “observe” them, so neither selection bias nor Bayes’s theorem has any relevance with respect to my arguments. These points, I thought, were obvious because the bulk of our discussion has involved p-values of wholly unobservable events (e.g., protein-chain synthesis after the Big Bang, etc.), but perhaps I should have been clearer.

As for dimensions, I agree…and, as I think you’d agree, too, more dimensions don’t necessarily prove the multiverse, which, some physicists say, is simply the union of all “parallel” universes (as opposed to the “forked” theory proposed by QM). Physicists also suggest such universes might very well have different physical constants, which doesn’t help us much when we’re talking about p-values with respect to the current cosmology. I don’t believe a larger sample space gets us there either. (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life, (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero, and (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial. I’m familiar with the first two theories you mentioned, and I know inflation is very popular because it answers a lot of questions, including (1) the “flatness problem”—a feature of the permissible range for Omega values (the ratio of gravitational to kinetic energy)—and (2) CBR homogeneity. As for cosmological conflicts, there are many…everything from problems of initial inhomogeneity and UV radiation to “permittivity” of free space and interfering cross-reactions within the process of amino acids forming peptide bonds. I guess the “null hypothesis” is the heart of the matter. Though I would never suggest very low p-values are, in and of themselves, proof of design, I feel such extreme improbabilities strongly suggest a designer—or, at least, strongly argue against chance. There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000. Numbers like that suggest something beyond the merely improbable…well beyond canons, buckets of dice, bouncing footballs, and even protein chains.

Peter:  > (1) There’s no evidence for the multiverse (or forking parallel universes), (2) the vast majority of the enlarged sample space would involve sterile universes incapable of sustaining any kind of life.< Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it. > (3) an infinite sample space means infinite exponentiation on (0,1) that approaches not one but zero< Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n goes to infinity. I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.) > (4) current cosmological evidence (e.g., cosmic background radiation (CBR), etc.) only supports a single trial. < AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something? > There’s something extraordinarily unnerving about the idea that the stochastic process involved in generating the human genome equals a range of something like (1 in) 4^-180^110,000 to (1 in) 4^-360^110,000.<

It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us! Oops! I should have read before shooting my mouth off: there are at least some people who claim to have found evidence that our universe bumped into another one (an arXiv paper linked from Wikipedia). But that’s pretty recent and (I think) controversial.
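(The gap between “at least one trial succeeds” and “every trial succeeds” is easy to see numerically; the per-trial p below is an arbitrary illustrative value:)

p = 1e-9   # arbitrary illustrative per-trial probability

for n in (10**6, 10**9, 10**12):
    print(n, 1 - (1 - p)**n, p**n)

# 1 - (1 - p)^n climbs toward 1 as n grows, while p^n collapses toward 0.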

David:  > Well, anthropic reasoning more or less interprets (2) as the reply to (1). If universes that can sustain life are statistically unlikely, and we know there is at least one that can (ours), there are probably many others that can’t. And at least some physicists think there may be other ways to observe universes, as crazy as it sounds. So I don’t think these are evidence against it. < Well, I’ve tried to frame my arguments in a way that bypasses anthropic reasoning. (1) means we don’t really need to think about a very large sample space, and (2) becomes irrelevant to our discussion of p-values that relate to our current cosmology. > Are you sure about that? If p is the probability that any randomly-generated universe can sustain life, and n is the number of universes, then the probability that *at least one* can sustain life is 1 – (1 – p)^n, which approaches 1 as n approaches infinity.  I think you’re taking p^n, which is the probability that *every* universe can sustain life. (And note that I’m not doing any non-replacement stuff either—p is the same for every trial.) < Well, I was giving cosmologists the benefit of the doubt and assuming the possibility that quantum fluctuations replicate the process involved in our current cosmology (thus, p^n), but if we’re only interested in “at least one” universe—and, perhaps, our current universe is a reification of that event, which might suggest the other universes do not sustain life—the formula is almost too convenient to be helpful; it states that every non-impossible event is guaranteed to occur (at least once) over an infinite number of trials. I’ll leave it to you to imagine at-least-once events that offer fatal contradictions. > AFAIK, none of the multiverse models make any predictions about that. Every universe will have its own background radiation, and we wouldn’t expect it to “leak” from one to another. Unless I’ve missed something? < Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB. That is, current cosmological models suggest the Big Bang created space (i.e., before it, there was REALLY nothing); inflationary models allow space to travel faster than the speed of light in order to preserve relativity theory, and AFAIK quantum fluctuations can theoretically create mass without the cosmic fireworks of CBR. Conserving the laws of thermodynamics, though, means the duration of these “spontaneous” masses is incredibly small and unobservable (as are the masses). > It’s funny—I think we (as intelligent lifeforms) are pretty insignificant in the bigger picture, and it’s unnerving to imagine that this huge, empty universe would have been created just for us! <

Unless you think about a fantastically creative and loving God who chose to have a relationship with us, despite our incredible insignificance!

Peter:  I get that you want to bypass the anthropic principle, but as long as we’re reasoning from our actual experience in the universe, you can’t. It’s a general principle of reasoning about observations. If you want to talk about “Platonic events” divorced from our human perspective, that’s great, but then the unlikeliness of our universe doesn’t demand explanation: any other universe would have been equally unlikely, and there’s nothing obviously special about ours a priori. (Neil Manson, linked above, addresses what it means to take “life” or “intelligent life” or “protein synthesis” to be special—it’s not simple.) While I’m sympathetic to the complaint that this multiverse stuff is “too convenient”—that it explains everything equally well—the divine-creator explanation has the same flaw. As you may know, there are some physicists who consider multiverse theories “unscientific” for precisely that reason. [edit: I left out this paragraph by mistake] The question of evidentiary support is well taken. There doesn’t seem to be consensus on what would even count as evidence for a multiverse—though that’s hardly a unique scenario for scientific theories, including some that have gone on to be vindicated (e.g., evolution, quarks, cosmic inflation). So no, I don’t think multiverse theories are self-defeating, at least not at the point you identify, nor do I think it’s driven by a refusal to accept the divine creator. It’s about a commitment to natural explanation before supernatural. [end edit] > it states that every non-impossible event is guaranteed to occur (at least once) over an infinite number of trials. I’ll leave it to you to imagine at-least-once events that offer fatal contradictions. < I don’t follow: doesn’t “non-impossible” preclude “fatal contradiction”? Anyway, the anthropic argument doesn’t call for an infinite number of universes, just enough for there to be one that sustains life. If indeed there are infinite universes (as some physicists think), then the situation is even worse than you describe: “anything that can happen will happen an *infinite* number of times.” (See pp. 18–19 of this paper for a sketch of the proposed solutions to this “Measure Problem.”) Note that the last couple items in the bibliography (Freivogel and Nomura) have a lot more to say about this; it’s well-trod ground.

> Well, assuming the multiverse requires space to exist, those universes couldn’t exist apart from the BB. <

Space is complicated. Like I said, our universe is “causally unconnected” to other universes—either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. As for how fluctuations produce inflationary “bubbles,” my initial guess was that they were a different kind of fluctuation from the standard vacuum uncertainty, which is how I found the paper linked above. The calculations start on p. 2, but they smacked me down pretty hard. I really do want to learn this stuff some day… sigh.

>There’s something extraordinarily unnerving…it’s unnerving to imagine….Unless you think about a fantastically creative and loving God.<

Right. I just meant that “unnerving” is in the eye of the beholder. Sorry for constantly referring to the Neil Manson article—I get that you don’t have the time or inclination to read these things—it’s just hard to summarize. Here’s the relevant bit from the abstract, which follows a defense of the “design argument”: “Lastly, some say the design argument requires a picture of value according to which it was true, prior to the coming-into-being of the universe, that our sort of universe is worthy of creation. Such a picture, they say, is mistaken, though our attraction to it can be explained in terms of anthropocentrism. This is a serious criticism. To respond to it, proponents of the design argument must either defend an objectivist conception of value or, if not, provide some independent reason for thinking an intelligent designer is likely to create our sort of universe.” The full argument appears on pp. 172–175. This is why I keep referring to anthropocentrism.

David:  I do have the inclination…just not as much time, which is why I haven’t responded to date. All apologies.

> I get that you want to bypass the anthropic principle, but as long as we’re reasoning from our actual experience in the universe, you can’t. It’s a general principle of reasoning about observations. If you want to talk about “Platonic events” divorced from our human perspective, that’s great, but then the unlikeliness of our universe doesn’t demand explanation: any other universe would have been equally unlikely, and there’s nothing obviously special about ours a priori. < I’m not sure why I can’t, lol. I’m interested in discussing the stochastic probability of protein chains and peptide bonds and DNA sequencing subsequent to the BB (but before our emergence onto the scene as conscious, observant beings), all of which are wholly unobservable events. In fact, most of the probabilities in the universe might be considered “Platonic” (i.e., unobserved)—from the imminent explosion of distant quasars and formation of black holes to the 46.3 percent probability that the tree in the wooded midway on my way to work will be uprooted at wind speeds exceeding 72.4 mph. That approach doesn’t necessarily demand anything, but discussing the origin of the universe in the absence of a designer places the burden on physics and mathematics (specifically, probability theory)…and THAT does demand investigation. > While I’m sympathetic to the complaint that this multiverse stuff is “too convenient”–that it explains everything equally well–the divine-creator explanation has the same flaw. As you may know, there are some physicists who consider multiverse theories “unscientific” for precisely that reason. < True. The difference, though, is that faith does not require proof…despite Hitchens’s claims to the contrary. In fact, the Bible says faith IS the proof (Hebrews 11:1). I don’t mind “convenient,” but “logically easy” rubs me a bit the wrong way. (The “at least one” (ALO) formula for infinite n is an example.) I understand some might consider faith to be “logically easy,” but I’m comfortable with the notion that faith is completely different from, and directly opposed to, science. > The question of evidentiary support is well taken. There doesn’t seem to be consensus on what would even count as evidence for a multiverse–though that’s hardly a unique scenario for scientific theories, including some that have gone on to be vindicated (e.g., evolution, quarks, cosmic inflation). So no, I don’t think multiverse theories are self-defeating, at least not at the point you identify, nor do I think it’s driven by a refusal to accept the divine creator. It’s about a commitment to natural explanation before supernatural. < I think it’s a bit problematic to lionize natural explanations as a feature of coeval scientific understanding. We’ve seen many times throughout history that science very often “got it wrong” in light of new evidence. That’s not to say science isn’t incredibly valuable and insightful—it is—but it is finally limited in its capacity to explain events based upon restricted observation(s) and imperfect knowledge. Again, in the absence of a designer, we have no choice but to follow such a path, but that’s why we need to be careful. Many of these theories exist without any serious physical evidence. That’s fine, but that’s also why I’m focusing on abstract p-values because they offer a more substantive and dispassionate line of inquiry with respect to “natural explanations.” > I don’t follow: doesn’t “non-impossible” preclude “fatal contradiction”? 
Anyway, the anthropic argument doesn’t call for an infinite number of universes, just enough for there to be one that sustains life. If indeed there are infinite universes (as some physicists think), then the situation is even worse than you describe: “anything that can happen will happen an *infinite* number of times.” < Not at all. (And, again, I’m not making an anthropic argument.) Here’s a “trivial” example: Assume we can establish a p-value involving whether or not God created the universe. Motive for the creation is irrelevant. (Perhaps this number is simply the complement of the probability that stochastic processes “created” the universe.) According to 1 – (1 – p)^n for an infinite n, even if that value is vanishingly small—and, as an agnostic, I imagine you’d argue p > 0 (otherwise, you’d be an atheist)—then it is the case that 1 – (1 – p)^n approaches 1 as n approaches infinity. And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s essentially the same thing as saying every possible event (E) will occur within each of the infinite universes (if not multiple times within a single universe). This is why I invoked p^n = 1, which contradicts the calculation that infinite exponentiation on (0,1) approaches zero.

Anyway, there are different metrics we can use, too.  For example, I think the Poisson distribution

P(x;\mu) = e^{-\mu}(\mu^x)(x!)^{-1}

might be a better p-measure for the stochastic probability of the universe; here, the p-value approaches zero as the mean of the sample space approaches zero, even for arbitrarily large x. That seems much more intuitive to me: for an extremely small p-value for a single trial—which, in a real way, becomes the mean case for the stochastic probability of our current universe—the probability of future successes decreases. This is the opposite of the mechanism behind the ALO equation, where the probability increases as the number of trials increases. Another model I prefer involves curves like exponential decay (and equations like it); for example, the simple non-homogeneous differential equation

dp/dt = te^{-3t}-3p

is one such (general) reification of a curve modeling (what I believe is the basic notion of) probability over time subsequent to the BB. For the sake of completeness, the general solution is

dp/dt + 3p = te^{-3t}

(dp/dt + 3p)e^{\int 3\, dt} = te^{-3t + \int 3\, dt} = t

\int \big(d(pe^{3t})/dt\big)\, dt = \int t\, dt

p(t) = (t^2e^{-3t})/2 + \delta e^{-3t}

where the Euclidean metric

\big[(p(t_n) - p(t_{n-1}))^2\big]^{1/2} \rightarrow 0

as t approaches infinity, which is what we want. This is intuitive: if after the BB, space expands faster than the speed of light, pulling matter behind it (though not quite at the SOL) in a nearly homogeneous way, it seems incredibly unlikely that, over time, the necessary material would have the opportunity to create protein chains and the like, especially when the force and velocity of Guth’s slow-roll inflation inexorably pushes that material further apart through the expansion.
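(A quick numerical check that the closed-form p(t) both solves the equation and decays; δ, the constant of integration, is set to an arbitrary illustrative value:)

import math

delta = 0.5   # arbitrary illustrative constant of integration

def p(t):
    # p(t) = (t^2/2)e^{-3t} + delta*e^{-3t}
    return (t**2 / 2 + delta) * math.exp(-3 * t)

def residual(t, h=1e-6):
    # dp/dt - (t*e^{-3t} - 3p) should be ~0 if p solves the ODE
    dpdt = (p(t + h) - p(t - h)) / (2 * h)
    return dpdt - (t * math.exp(-3 * t) - 3 * p(t))

for t in (0.5, 2.0, 10.0):
    print(t, p(t), residual(t))   # p(t) shrinks toward zero; residual ~0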

> Space is complicated. Like I said, our universe is “causally unconnected” to other universes–either by the event horizon of a black hole, or by some of the more subtle general-relativistic stuff in eternal-inflation theory. So no, we wouldn’t share the same Big Bang in any observable sense. <

Whence, then, the space for those universes? Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse; in fact, Guth’s paper suggests that very notion. Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing.

Jeff:  A few quick observations on this discussion:

1. I’m enjoying it immensely, while understanding only some of it, and being completely unable to participate in it.

2. It is taking place on the Internet.

3. It is completely civil and, until this moment, focused on the issues of the discussion and not observation of the discussion itself.

4. There are no cats anywhere in this discussion. Not even Schrödinger’s — the poor thing(s).

5. The convergence of factors 2, 3, and 4 above — a civil discussion on the Internet without the inclusion of cats — seems so highly improbable, involving opposing forces of such strength able to co-exist only in conditions at or immediately following the BB (I can’t do the math, but y’all can do it in your sleep, apparently) that I hereby postulate that this discussion is not actually taking place. Now, please, continue.

Peter:  Thanks, Jeff! I can’t believe you (and at least two others) read this far. I’ve learned a lot over the last couple weeks. Dave, we need to find a publisher. I eagerly await your further thoughts! In the meantime, here’s my bid for Longest Post So Far. Sorry in advance.

> Anyway, I’d much prefer an investigation of objective p-values rather than debating diachronic theories of cosmology. <

It’s one thing to assign a p-value, and another to interpret it as evidence for design.

> I think part of the issue is that we don’t fully comprehend the magnitude of the improbabilities with which we’re dealing. <

I think we both understand them fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other.

Here’s an illustration. I’ll ask my computer for ten random integers:

for i in {0..10}; do echo -n "$RANDOM "; done

1967 3496 15853 19457 29526 16109 16229 15867 14059 223 15303

[Edit: Oops! That was eleven. And that’s why I’m not a professional programmer. ]
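(The “1 in 10^45” figure below is the right order of magnitude: bash’s $RANDOM spans 0–32767, so any specific ten-number sequence has probability 32768^-10:)

p = (1.0 / 32768) ** 10
print(p, 1 / p)   # ~7.0e-46, i.e., roughly 1 in 1.4 x 10^45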

That sequence is incredibly unlikely: the p-value is just 1 in 10^45. (In other words, if every person on Earth ran that command a billion times a second, it almost certainly wouldn’t come up again before the sun engulfed our planet.) But that, in itself, gives no reason to suspect it was specially chosen. For us to make that leap, it needs to have some properties that a designer would care about—in terms of our older examples, it would have to be the equivalent of a double canon or a bucket of sixes. In those examples, we recognize canons and high rolls as valuable in the domains of music and gaming. (Manson gives the analogy of poker, where an accusation of cheating is more persuasive if the cheater ends up with a strong hand.) Perhaps there is something special about this sequence, which would be ruined by even a slight change to any number. We still can’t claim that it was specifically chosen without assuming that the chooser also knows and cares about this special quality and would thus be motivated to choose this sequence over any other. So this is what I meant by saying the low probability of our universe doesn’t inherently “demand explanation.” We agree that our universe appears to be uniquely tuned for life and extremely improbable, but we disagree about the next step. In order to argue for design, we have to assume that life is inherently valuable within the domain of universe-creation, just like canons and sixes are in music and dice games. But (as Neil Manson points out) it’s hard to find people who explicitly defend that assumption, probably because it’s a bit embarrassing and not that easy to do without assuming some amount of theology and thus rendering the argument circular. I found one defense by Richard Swinburne.

I haven’t gotten to read it all the way through—the Google Books preview cuts out just as he gets to the multiverse issue—but I’m very curious.

*** AND NOW, Some Remaining Ancillary Quibbles *** (no obligation to discuss this stuff if you’re sick of it)

> And if, according to Guth, every event (that can occur) will occur an infinite number of times, then that’s basically the same thing as saying some event E—actually, every event that could occur!—will occur within each of the infinite universes. < No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant. > I doubt anyone would accept that as a “proof” of God’s existence, even though it makes perfect logical and mathematical sense, which is, of course, why the ALO equation is problematic at infinite n. < It only makes sense if divine creation is “an event that can happen” according to the laws of physics—in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.) > Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse…<

> I doubt anyone would accept that as a “proof” of God’s existence, even though it makes perfect logical and mathematical sense, which is, of course, why the ALO equation is problematic at infinite n. <

It only makes sense if divine creation is “an event that can happen” according to the laws of physics—in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.)

> Are we to assume that an infinite number of BBs (i.e., quantum fluctuations) begat the infinite number of universes? That’s more difficult to believe than (an infinite number of) fluctuations within our own universe as the catalyst for the multiverse…<

Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified. As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) So it doesn’t limit the explanatory ability of eternal inflation. All that said, I actually have no idea what eternal-inflation theory would say about the ur-origin of the grandfather universe. For all I know, it may dispense altogether with the idea of an origin, and just let every universe bubble out from another one, turtles all the way down! Or maybe that’s ludicrous. If only we had some physics-literate friends who were patient enough to wade through these ramblings.

Jim: I’ve followed but steered clear of participating in this conversation. I did want to put out there, though: isn’t this something we ultimately found a philosophical answer to in modernity? If anything, the 57 comments of back and forth reinforce the idea that all we can agree upon is the notion that there’s an innate uncertainty on the subject. It’s like we’re all holding out hope that we’ll someday find the answer that justifies our own personal belief through science, when the only thing science has really taught us is that the complexities of the universe(s) in its (their) entirety will always fall beyond the capacity of human reason. Wouldn’t the pursuit of knowledge be bettered if we all called a truce? On Pi Day, can’t we all just get along and agree that we’ll never be able to calculate that last digit of infinity? If there’s a God that created our physical realm, clearly he doesn’t intend for us to ever find the end of the rainbow, is all I’m saying.

David:  >>I think we both understand [astronomically low p-values] fine, at least mathematically. (Yeah, it’s probably impossible to understand them intuitively.) We disagree on what they imply. You say the p-value is so low that our universe couldn’t be a random accident. I say no p-value, no matter how small, could ever make that case by itself: our universe is no more or less likely than any other.<<

Well, considering all the variables involved, I’m saying it’s very, very highly unlikely that chance is responsible for the complexities and details of the universe. And I think the fact that very low p-values are so difficult to understand intuitively plays an important role in this. Consider this narrative by James Coppedge, from “Evolution: Possible or Impossible?”

“The probability of a protein molecule resulting from a chance arrangement of amino acids is 1 in 10^287. A single protein molecule would not be expected to happen by chance more often than once in 10^262 years on the average, and the probability that one protein might occur by random action during the entire history of the earth is less than 1 in 10^252. For a minimum set of the required 239 protein molecules for the smallest theoretical life, the probability is 1 in 10^119,879. It would take 10^119,841 years on the average to get a set of such proteins. That is 10^119,831 times the assumed age of the earth and is a figure with 119,831 zeroes, enough to fill sixty pages of a book this size.”

“Take the number of seconds in any considerable period. There are just 60 in a minute, but in an hour that increases to 3,600 seconds. In a year, there are 31,558,000, averaged to allow for leap year. Imagine what a tremendous number of seconds there must have been from the beginning of the universe until now (using 15 billion years…). It may be helpful to pause a moment and consider how great that number must be. When written down, however, it appears to be a small figure: less than 10^18 seconds in the entire history of the universe. The weight of our entire Milky Way galaxy, including all the stars and planets and everything, is said to be ‘of the order of 3 x 10^44 grams.’ (A gram is about 1/450th of a pound.) Even the number of atoms in the universe is not impressive at first glance, until we get used to big numbers. It is 5 x 10^78, based on present estimates of the radius at 15 billion light years and a mean density of 1/10^30 grams per cubic centimeter. Suppose that each one of those atoms could expand until it was the size of the present universe so that each had 5 x 10^78 atoms of its own. The total atoms in the resulting super-cosmos would be 2.5 x 10^157. By comparison, perhaps the figure for the odds against a single protein forming by chance in earth’s entire history, namely, 10^161, is now a bit more impressive to consider. It is 4,000 times larger than the number of atoms in that super universe we just imagined.”

…and this:

“Imagine an amoeba. This microscopic one-celled animal is something like a thin toy balloon about one-fourth full of water. To travel, it flows or oozes along very slowly. This amoeba is setting forth on a long journey, from one edge of the universe all the way across to the other side. Since the radius of the universe is now speculated by some astronomers to be 15 billion light years, we will use a diameter of double that distance. Let’s assume that the amoeba travels at the rate of one inch a year. A bridge of some sort – say a string – can be imagined on which the amoeba can crawl. Translating the distance into inches, we see that this is approximately 10^28 inches. At the rate of one inch per year, the tiny space traveler can make it across in 10^28 years.

“The amoeba has a task: to carry one atom across, and come back for another. The object is to transport the mass of the entire universe across the entire diameter of the universe! Each round trip takes 2 x 10^28 years. To carry all the atoms of the universe across, one at a time, would require the time for one round trip multiplied by the number of atoms in the universe, 5 x 10^78. Multiplying, we get 10^107 years, rounded. That is the length of time for the amoeba to carry the entire universe across, one atom at a time.

“But wait. The number of years in which we could expect one protein by chance was much larger than that. It was 10^171. If we divide that by the length of time it takes to move one universe by slow amoeba, we arrive at this astounding conclusion: The amoeba could haul 10^64 UNIVERSES across the entire diameter of the known universe during the expected time it would take for one protein to form by chance, [even] under those conditions so favorable to chance.

“But imagine this. Suppose the amoeba has moved only an inch in all the time that the universe has existed (according to the 15-billion-year estimate). If it continues at that rate to travel an inch every 15 billion years, the number of universes it could carry across those interminable miles is still beyond understanding, namely, more than 6 x 10^53, while one protein is forming. Sooner or later our minds come to accept the idea that it’s not worth waiting for chance to make a protein. That is true if we consider the science of probability seriously.”

I think that helps a bit with our intuition!
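[Edit: the multiplications inside that passage do check out on their own terms; a few lines of Python re-run them, taking Coppedge’s inputs entirely at face value. Whether those inputs are sound is the separate dispute Peter raises below.]

# Coppedge's inputs, as quoted above:
LY_IN_INCHES = 9.461e15 * 39.37              # one light-year, in inches
diameter = 30e9 * LY_IN_INCHES               # 30-billion-ly diameter: ~1e28 inches
round_trip_years = 2 * diameter              # at one inch per year
atoms = 5e78                                 # his atom count for the universe
haul_everything = round_trip_years * atoms   # ~1e107 years to move every atom
protein_wait = 1e171                         # his expected wait for one protein
print(f"universes hauled while waiting: {protein_wait / haul_everything:.0e}")      # ~1e64
print(f"...at one inch per 15 Gyr: {protein_wait / (haul_everything * 15e9):.0e}")  # ~6e53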

>>Here’s an illustration. I’ll ask my computer for ten random integers:<<

LOVE this example, but I’m not sure how a random integer string is any different from (essentially) rolling eleven dice. It seems like you’re arguing we should disregard (the import of) very low p-values because (1) very low p-values exist and (2) they’re ubiquitous (i.e., we can find them everywhere; thus, they offer no substantive value as highly improbable events). [Edit: I think these are Leon’s main objections, too.] If I’m understanding you correctly, your string of integers is a random-but-meaningless event (for us) primarily because it cannot distinguish itself—or, rather, we cannot distinguish it—from any other random string (i.e., it has no “meaning”). (Let’s assume we wouldn’t get a subset of the Fibonacci sequence or something recognizable or meaningful.) I think that’s what you were saying with respect to a “[property]…a designer would care about.”

So, the question then becomes: How do we assign meaning to p-values—on the order of double canons and buckets of sixes—without appeals to anthropic, fit-for-life arguments? I’ve thought about it, and I just don’t know the answer to that question. I am convinced, though, that there is one! It’s clear, for example, that temporal perspective matters (ex post facto vs. a priori quantifications of probability). Also, the p-value of a bucket roll means one thing when it represents the probability of ANY one of the possible bucket rolls in the set B, given as \forall x\, p_x = (1/6)^n, but, as I’m sure you would agree, it means another as the probability of a specific constellation of dice (p_i), even though

p_i = p_x = (1/6)^n

because a bucket of sixes, E_6, is an element of the set of all possible dice constellations:

E_6 \in B : |B| = \Gamma(n+f)[\Gamma(f)(n!)]^{-1} = \binom{n+f-1}{n}

where f is the number of faces on a single die. (This cardinality counts unordered constellations, so repeated arrangements aren’t double-counted.) But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable.
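[Edit: to put numbers on this, using the eleven-dice example from above and the usual one-sided 5-sigma bar. A Python sketch; the figures follow directly from the formulas just given.]

import math
from statistics import NormalDist

n, f = 11, 6  # eleven dice, six faces each

p_specific = (1 / f) ** n           # any one specific ordered roll: ~2.76e-09
card_B = math.comb(n + f - 1, n)    # |B| = Gamma(n+f)/[Gamma(f) n!] = C(n+f-1, n) = 4368
p_5sigma = 1 - NormalDist().cdf(5)  # one-sided 5-sigma tail: ~2.87e-07

print(f"p of a specific ordered roll: {p_specific:.2e}")
print(f"|B| (distinct constellations): {card_B}")
print(f"5-sigma threshold: {p_5sigma:.2e}")

On these numbers, even a single specific ordered roll of eleven dice is already two orders of magnitude rarer than the 5-sigma threshold, which is the tension Peter pushes on below.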

>>No, it’s totally different. Given infinite universes, even things that only happen in one universe out of a bajillion will still happen an infinite number of times. This makes calculating probabilities really annoying, as Guth says, but it doesn’t mean that everything is equally probable. There’s no contradiction here, and p^n is still irrelevant.<<

Well, if we’re supposed to exclude all the universes where the event doesn’t occur, then that falls into the “logically easy” category. It’s as if I said I’m going to roll an infinite number of sixes—just ignore any roll that isn’t a six. The only requirement for that logic to work is that I keep rolling the die. The p-success of every event in that scenario equals one (even if the constellation of universes changes for each event); we just exclude the results we don’t like. Of course, we know that p^{\infty \pm m} = p^{\infty} \rightarrow 0, so, in that sense, exceptions don’t even really apply.

>>It only makes sense if divine creation is “an event that can happen” according to the laws of physics—in other words, if God is just another agent in the multiverse, subject to the same laws as everything else. I don’t see why anyone should have a problem with that. To really cause a problem for this probability thing, we’d need an event that “can happen” once, but not an infinite number of times, even in different universes. (Note that I’m not saying physics is incompatible with divine creation, only that physics doesn’t explain divine creation.)<<

I think my “proof” suggests such a singular event. An event that completely alters (or destroys) the universe would also be another example. (I’ve read this is theoretically possible.) Also, I don’t think the Platonic, universe-by-God p-value is influenced in any way by whether or not God is subject to the laws of physics. (If He is, then He must not exist.) Either God did or didn’t create the universe: as a historical probability, clearly p \in \{0, 1\}.

>>Yeah, I think the first one is right. But what makes it harder to believe than the second? As far as I can tell, the only difference between them is that the second assumes that our universe is the “grandfather” from which all others spring, rather than one of the later generations. That’s a big assumption, and I’m not sure how it could be justified.<<

Well, it’s more difficult to believe for the same reason a single BB is difficult to believe: quantum fluctuations require (at least a vacuum that requires) space-time, which doesn’t emerge until after the BB/fluctuation begins. So, I guess it’s easier to imagine an infinite number of fluctuations within an already-established space-time continuum than an infinite number of impossible “space-less” fluctuations emerging outside of space-time. (And nothing in Guth’s article suggests a “space-less” fluctuation.) Consider this quote: “Quantum mechanical fluctuations can produce the cosmos,” said…[physicist] Seth Shostak…. “If you would just, in this room,…twist time and space the right way, you might create an entirely new universe. It’s not clear you could get into that universe, but you would create it.” Oddly, Shostak’s claim presupposes both time and space in order to hold.

>>As for the rate of universe-generation, the exponential-decay model sounds plausible enough to me (that’s what you were using it for, right? I wasn’t sure), though I’d prefer a model more motivated by the actual physical theory. But even if a single universe does gradually lose its ability to create new ones, that doesn’t put an upper bound on the total number of universes out there, given sufficient time. (Think bunnies.) So it doesn’t limit the explanatory ability of eternal inflation.<<

What if the bunnies were expanding away from each other at cosmological speeds (i.e., 74.3 ± 2.1 km/s per megaparsec), lol?! (One megaparsec equals roughly 3.26 million light-years.) Not even bunnies can copulate that quickly, lol. Eventually, each bunny would become completely isolated—the center of its own galaxy- or universe-sized space—where it could no longer procreate and repopulate. So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time.

Peter:  > But why can’t we appeal to statistical significance as the “domain” of probability measurements? It seems awkward to suggest the stochastic cosmological p-value—as incredibly and infinitesimally small as that number is—wouldn’t satisfy, say, the 5-sigma measure for rejecting the null hypothesis. <

Okay, if the null hypothesis is, “No other universes exist, and the cosmological parameters were pulled at random from any of the [whatever huge number] possibilities,” then yeah, that’s probably safe to reject. But in rejecting that, we’re still nowhere near an affirmative argument for design. There are plausible alternatives to both prongs of the hypothesis, each an active area of research for decades: to the first, an account of universe-generation in which our sort of universe is more likely than others (as Sean Carroll describes in the piece you linked); to the second, multiverse theories like eternal inflation. This is the familiar course of “God of the Gaps” arguments. They present a false choice between materialist and theist explanation, and paint God into an ever-diminishing corner: if the proof of the divine rests on what we don’t understand, then what happens when we understand it? I’m much more sympathetic to the Spinozist (I think?) thought that credits God for the astonishing regularity of the universe. (Side note: Coppedge gets the math all wrong… but that deserves its own thread, and is well documented elsewhere in any case.)

> Perhaps the formation of protein chains should be equated with Manson’s high poker hand or a bucket of sixes without appealing to anthropic principles. I don’t see that as being unreasonable. <

But the thing is, even OUR universe is practically devoid of protein chains. If you look at our universe as a whole, why would protein chains be its most important feature? And is there no other possible universe in which protein chains could be more common than in ours? Even if there’s not, can we assume that ANY being capable of universe-creation would necessarily prioritize protein chains above all other arrangements of matter and energy, under any possible set of physical laws? That’s what the design argument requires, and I don’t see how it can be justified. (Though this is basically what Richard Swinburne attempts in that chapter I linked a couple weeks ago, albeit with humans instead of protein chains.) Anyway, one thing I’ve really enjoyed about this discussion is that both sides counsel humility: “Science can’t explain everything” vs. “We’re not the most important thing in the universe.”

> So, inflation perforce establishes the rate of “cosmological procreation” as inversely proportional to time. <

Um, I think the bunny metaphor may have gotten away from us. Universes don’t copulate, and even if universe-creation slows over time within a single universe (a claim that requires a LOT more physics than you and I know), that still wouldn’t limit the number of universes. That’s what I meant to illustrate with bunnies—they get old and die, but their children keep reproducing. I suspect your intuition stems from conservation laws—if there’s only a finite amount of stuff, then it’s going to slow down as it spreads out—but I think that intuition may be mistaken here. Universe-creation doesn’t need to be conservative if the universes are isolated from each other (i.e., “we can’t get in”). And as long as universes generate more than one baby bubble universe on average, the process tends toward infinity. I’m guessing your next question will be “If the baby universes aren’t inside the parents, then where are they?” [edit: oops—that’s referring to a sentence I deleted! In short, I suspect it’s wrong to think of baby universes as contained within their parents.] Honestly, I haven’t a clue—but I’m guessing it’s the wrong question to ask. Going out on a limb, I’d guess that the path between universes is neither spacelike nor timelike (in the relativistic sense), and it’s kind of meaningless to try to specify “which dimensions” are involved. Suffice it to say they’re isolated from each other.
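[Edit: Peter’s “bunnies” point is essentially a Galton-Watson branching process. A toy Python simulation, with an entirely hypothetical offspring distribution, shows the behavior he describes: every parent stops reproducing, yet the population still explodes whenever mean offspring exceeds one.]

import random

def generation_sizes(founders=4, p_two=0.75, generations=25, seed=3):
    """Each parent has 2 children with probability p_two, else 0, then stops.
    Mean offspring = 2 * p_two = 1.5 > 1, so surviving lineages grow
    geometrically (early extinction by bad luck is still possible)."""
    rng = random.Random(seed)
    size, history = founders, [founders]
    for _ in range(generations):
        size = sum(2 for _ in range(size) if rng.random() < p_two)
        history.append(size)
        if size == 0:  # the whole lineage died out
            break
    return history

print(generation_sizes())  # population per generation, trending sharply upward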

[END]

Standard
ECONOMICS, LAW, PHILOSOPHY, POLITICS

Why I Am Not a Libertarian

“Decisions concerning private property and associations should in a free society be unhindered. As a consequence, some associations will discriminate….A free society will abide unofficial, private discrimination—even when that means allowing hate-filled groups to exclude people based on the color of their skin.” ~ Rand Paul (letter to the editor of the Bowling Green Daily News—May 2002)

Let’s be clear: Rand Paul thinks business owners should be able to discriminate (i.e., “install racist policies”) against minorities in the name of private-property rights. Such discrimination represents, for Paul, a certain level of acceptable noise within the libertarian system. In a hand-waving defense of libertarianism, a friend suggested we cannot “legislate morality.” So true. With respect to secular government—as opposed to, say, a theocracy—natural law is perforce reduced to a set of accepted coeval standards. So, no, we can’t (constitutionally) legislate morality—although the history of constitutional amendments and the rulings of the SCOTUS, to a certain degree, suggest something of a moral touchstone—but we can legislate against premeditated, reckless, or immoral behaviors that negatively impact others. Refusing to serve African Americans within one’s privately-owned establishment, for example, is hardly synonymous with choosing an eclectic group of friends with whom to associate, and wielding the Constitution to defend exclusively white lunch counters, as though property rights answered the charge of racism, fails even an objective logic test (i.e., ignoratio elenchi).

That we cannot convince people to “live morally” with respect to others (i.e., to accept that choosing the moral path is, ipso facto, the reward for moral decision-making) is the reason we need legislation, “red lines,” to use Netanyahu’s familiar phrase, that regulate such behavior. Are you free to commit murder? Drive drunk? Run a stop sign? Cheat on your tax returns? Beat your children? Assist a suicide (in some states)? Bet against doomed investments actively sold to consumers? Sure you are, but the empowerment that emerges from individual autonomy will never stand as a justification for immorality. This is why we have laws in this country, a legislative feature of our democracy that implies the moral sentence of ignominy is simply insufficient to dissuade free moral agents from engaging in harmful behavior. If Goldman Sachs can generate 500 billion dollars by screwing a number of gullible clients, they will. And the shame they might feel as a result of their immoral-but-legal behavior—if they felt any at all—would be quickly washed away by the euphoric wave of, well, 500 billion dollars. This is when government needs to outline clear legislative restrictions.

Even in a more general sense, though, there’s something profoundly disturbing about the entire libertarian project, something akin to an infantile failure of “object permanence” writ large upon the socio-macroeconomic landscape. Libertarians wish to eliminate the conscious recognition of significant social and economic inequalities by placing severely opaque, ambiguous, and patriotic-looking objects—things like “individual liberty,” “property,” “private ownership,” and, worse, constitutional “originalism”—in front of, as it were, the objects that should truly demand our collective attention: poverty, the insidiousness of plutocratic rule, wage stagnation, corporate avarice, discrimination, economic inequality and inequality of opportunity, and the insufficiency of our secondary educational system. Put a different way, libertarianism is that Brobdingnagian picture of some Parisian locale—the kind of deceptive photomontage used by, say, inner-city street vendors—where visitors can pretend their fatuous photographic moment temporarily transports them to a different reality, a vision far more pleasant than the project-ridden dystopia conveniently hidden behind the photo.

Such a tactic is even more embarrassing than the historically disingenuous attempts to distract us with those impossibly shiny objects—notions of the “American Dream” and “economic mobility”—that are waved at us by the trickle-down one-percenters, as if capitalizing the ‘a’ and ‘d’ increased the viability of such an ideal for the vast majority of Americans within the current economic infrastructure. Begin a discussion concerning disadvantaged families living in depressed areas without any real opportunity for economic mobility—an issue directly related to the sizable (and widening) income-inequality gap—and the libertarian response is always the same, tired refrain:

“No one is forcing them to live there!”
“Other people have escaped, so why can’t they?!”
“They’re struggling because they haven’t taken responsibility for their lives!”

(That last statement sounds awfully familiar, Mr. Romney.) Such unacceptably tenuous, eyebrow-raising “arguments” are precisely what suggest the “object permanence” metaphor. If we shift our focus away from inequality and inequality of opportunity toward notions of “self-empowerment,” “freedom,” and “God-given autonomy,” as the libertarian project would have us do, then we replace the genuine cause (imposed inequality) with a specious one (i.e., liberty—read: moral bankruptcy that results in substandard living as a function of one’s free, non-determined choices). That sleight-of-hand moment, my friends, the moment where we replace the real cause of inequality with a weakly-constructed one, represents the seamy underbelly of the libertarian project, an ideology clothed in patriotic garb and painted with roll-up-your-sleeves, red-white-and-blue-sounding slogans that cleverly evoke the American machismo of “manifest destiny.”

This does not, however, represent the worst of libertarianism. The evil of the libertarian experiment resides not only in its desire to subversively enact a sort of bait-and-switch morality on the American people, but also in the number of abstract ways it models the non-genocidal dangers of the National Socialist experiment: a desire to institutionalize public racism and discrimination under the guise of state-facilitated ownership; the individual-as-totalitarian state; rejection of general social contracts in the name of a fascist sense of “liberty”; support for economic internment camps, which replace barbed fences with economic immobility; and an economic “master race” that is fitter—in a fiscally eugenic sense—than the less fortunate and less educated. An insidious project must begin, as it always does, with a popular-yet-specious allure. Libertarianism has chosen the buzzword “liberty.”

To be sure, it is the principal desire of the libertarian project to effectuate a cognitive denial of the true cause(s) of inequality, and libertarianism secures this by suggesting its weakly-constructed alternative (free choice) is the real cause. That is, by effacing genuine causes of inequality, libertarianism is able to substitute its own prescriptions for inequality (e.g., indolence, lack of an entrepreneurial spirit, an entitlement mindset, socialism, poor decision-making, etc.). In this way, libertarians have found a way to reject the very existence of inequality itself—and here is the important part—by claiming its presence within society is nothing more than a manifestation (and, with respect to the poor, an agglomeration) of individual decisions to be poor and disadvantaged. In other words, if inequality is simply a “decision to be unequal,” then even (the visual evidence of) inequality can be dismissed by the very libertarian dogma (i.e., free will) that reifies it.

How convenient.

And when libertarians respond by arguing that inequality IS, in fact, a product of free choice, that statement, ipso facto, vindicates my argument; it becomes the very evidence that libertarians—like ignorant viewers flipping through someone’s old vacation photos—believe the faux reality of the Parisian photomontage means they’re really in Paris. That is the immorality—the unshirted evil—of libertarianism: that the “solution” to the problem of inequity merely resides in its cause, an individual’s free will.

Standard