Class Note
The Mega Society was founded by Dr. Ronald K. Hoeflin in 1982. The 606 Society (6 in 10⁶),
founded by Christopher Harding, was incorporated into the new society and those with IQ
scores on the Langdon Adult Intelligence Test (LAIT) of 173 or more were also invited to join.
(The LAIT qualifying score was subsequently raised to 175; official scoring of the LAIT terminated at the end of 1993, after the test was compromised.) A number of different tests were accepted by 606 and by the Mega Society during the first few years of its existence. Later, the
LAIT and Dr. Hoeflin’s Mega Test became the sole official entrance tests, by majority vote of the
membership. Then, Dr. Hoeflin’s Titan Test was added. (The Mega Test and Titan Test were
also compromised, so Mega Test scores after 1994 and Titan Test scores after August 31st,
2020 are currently not accepted; the Mega and Titan cutoff is 43 - but either the LAIT cutoff or
the cutoff on Dr. Hoeflin’s tests will need to be changed, as they are not equivalent.) The Mega
Society now accepts qualifying scores on The Hoeflin Power Test and on The Ultra Test. Both
tests are still being scored. The Mega Society publishes this irregularly-timed journal. The
society also has a (low-traffic) members-only email list. Mega members, please contact one of
the Mega Society officers to be added to the list.
For more background on Mega, please refer to Darryl Miyaguchi’s “A Short (and Bloody) History
of the High-IQ Societies” —
http://miyaguchi.4sigma.org/BloodyHistory/history.html
http://www.megasociety.org/
Noesis is the journal of the Mega Society, an organization whose members are selected by
means of high-range intelligence tests.
Brian Wiksell (P.O. Box 366, Solana Beach, CA 92075) is the Administrator of the Mega
Society. Inquiries regarding membership should be directed to him at the aforementioned P.O.
box or the following email address: [email protected]
Opinions expressed in these pages are those of individuals, not of Noesis or the Mega Society.
© 2020 by the Mega Society. Copyright for each individual contribution is retained by the author
unless otherwise indicated.
Much has happened in the Mega Society since the publication of Noesis #205 in August of last
year. After holding three separate elections, there are new Mega Society officers in all three positions: Administrator, Internet Officer, and Editor.
You may remember that former Editor Kevin Langdon announced a call for volunteers for the
role of Administrator in Noesis #205. That call was answered by new Administrator Brian
Wiksell. Thank you, Brian. Jeff Ward (Administrator from 1982-2019) was forced to resign for
medical reasons last year. Let’s hope that Jeff is doing well and give him our thanks for
untiringly serving as Administrator since the Mega Society’s founding by Ron Hoeflin in 1982.
Former Internet Officer Chris Cole (Internet Officer from 2005-2019) decided to step down as
Internet Officer late last year. As many Mega members have rightly acknowledged, Chris Cole
allowed the Mega Society to run online smoothly for many years and created a genuinely
welcoming atmosphere. Thank you, Chris. Shortly after Chris’s decision to step down as Internet
Officer, Dan Shea volunteered and was elected Internet Officer. Upon getting elected, Internet
Officer Dan Shea stepped up in a big way by doing the following: migrating the Yahoo! Groups
messages, which stretch back two decades, to a new platform; creating a “responsive” new
website layout (https://megasociety.org/); and, embarking on a journal restoration effort. Journal
issues from The Titan Society (Insight, Titanic and Titania) and The 606 Society (Circle) along
with previous issues of Noesis may be found here: https://megasociety.org/#noesis. Thank you
for all of your hard work, Dan, and thanks to all who helped!
Kevin Langdon (Editor from 2005-2020) officially began serving as Editor with Noesis #176,
published in February of 2005. After many months of unsuccessful attempts to elicit any
response from Kevin by phone or email earlier this year, an election for Editor was announced
by Administrator Brian Wiksell. Two candidates received a majority of votes for Editor - Richard
May and Ken Shea. Therefore - pursuant to Article IV, Section 12 of the Mega Society
Constitution - both Richard May and Ken Shea will now serve as Editor. The current Editors
would like to thank Kevin Langdon for serving as Editor of Noesis for more than a decade! Kevin
Langdon edited and directly contributed to some of the most interesting discussions in Noesis.
Kevin Langdon announced a Letters to the Editor column in Noesis #205 (“something that I
intend to continue, with the help of Noesis readers” -Kevin Langdon). Both Editor Richard May
and Editor Ken Shea would like to keep this Letters to the Editor column going as well. It is our
feeling that a Letters to the Editor column will foster a more dynamic and interesting Noesis.
Accordingly, readers may send their letters to the following email addresses -
[email protected] or [email protected].
A majority of Mega Society members have voted to retire The Titan Test and accept qualifying
scores on The Hoeflin Power Test and on The Ultra Test. The Hoeflin Power Test and The Ultra
Test are still being scored and will now be used as admissions tests for the Mega Society.
Next, Bob Williams kicks off a sequence of three Noesis submissions on g and intelligence. Bob
Williams’s “The Tools of Intelligence” explores whether mental chronometry and brain imaging
can gauge intelligence as well or better than IQ tests. Could these novel approaches eventually
displace or supplement IQ tests? Might that day be soon?
After that, researcher David Redvaldsen subjects Ron Hoeflin’s Mega Test and Titan Test to a
psychometric analysis in “Do the Mega and Titan Test Yield Accurate Results? An Investigation
Into Two Experimental Intelligence Tests.” David Redvaldsen provides his own norms for these
high-range tests and assesses whether each test truly taps the one-in-a-million level.
Then, Mega Society founder and test creator Ron Hoeflin responds to David Redvaldsen’s
investigation and explains how he approached norming the Mega Test and Titan Test back in
the Omni magazine days. Ron includes an update on his Encyclopedia of Categories.
If the previous three contributions could be said to concern g and intelligence, then the next four
contributions might be said to wrestle with different aspects of philosophy.
Ken Shea’s paper “On the Potential Epistemic Invalidity of Phenomenological Accounts”
examines issues swirling around consciousness research, particularly the neural correlates of
consciousness, and whether consciousness can be explained within a physicalist framework.
Adam Kisby, a member of the Omega Society, then makes the case for scrutinizing the concept
of testability in a philosophy of science piece titled “Testing Testability,” which examines the
Principle of Testability through its manifold expressions via verifiability and falsifiability.
(Readers intrigued by Adam’s ideas may relish “Doubting Doubt,” a similarly thought-provoking
piece published in Noesis #197.)
Next, Rick Rosner provides answers to Scott Douglas Jacobsen’s intelligent, wide-ranging
questions in the final part of an interview series. Interview themes include: Rick’s take on deep
time, cosmology, ethics, consciousness, and artificial intelligence throughout the 21st century.
If a few brain-teasers sound good at this point, then the reader may cheer to learn that Ron
Yannone shares his long-time delight in “Litton Industries’ Problematical Recreations” in addition
to an eclectic array of problematical recreations to try one’s hand at.
Noesis Editor Richard May has also released a new book, Stains Upon the Silence: Something for No One! Readers are treated to two forewords, a preface by author Richard May, a.k.a. May-Tzu to Noesis readers, and an afterword by Adam Kisby.
At this point, Adam Kisby curates his Exceptionally Intelligent Individuals’ Extraordinary Ideas
Index (EIIEII). Step right up and determine whether you have any extraordinary ideas!
Ken Shea, then, compares the metaphysics and ethics of Arthur Schopenhauer to the work of
moral philosopher David Benatar in an essay titled “Arthur Schopenhauer’s and David Benatar’s
Contributions to Philosophy.”
Finally, Richard May, a.k.a. May-Tzu, rounds out Noesis #206 by serving up three piquant dishes: “Transontological,” “Physics as Erotica,” and “The Immortality of Zeno of Elea.”
Readers are invited to click on the title of a particular contribution on the following two pages
(i.e., pages six and seven) to skip ahead to the selected Noesis contribution.
Please submit material for the next issue of Noesis, tentatively planned for February 2021.
Chris Cole
As recently as the 1980s, physicists routinely referred to printed journals and textbooks to find
the solutions for various mathematical problems. Frequently this was a tedious process - but
that was the way physicists had always worked. What physics needed was a platform that
contained all this mathematics in a consistent language and notation. Hence a platform was
born for solving mathematical expressions: Mathematica, which became the ubiquitous and
indispensable software tool used by scientists worldwide. Notably, Mathematica forever
changed the way physics is taught to university students in their graduate studies.
Much of the necessary mathematics for scientists previously had been collected, often
painstakingly, into books, such as Handbook of Mathematical Functions (Abramowitz and
Stegun) and Table of Integrals, Series, and Products (Gradshteyn, Ryzhik, et al.). Mathematica
incorporated the insights of generations of mathematicians to make computation broadly
accessible. This revolution in scientific computation occurred just as the Internet similarly
changed the world of science and commerce. Now the production of publication-ready text
using mathematical notation and graphics is quite standard. The remarkable combination of
Mathematica and the Internet did not merely improve the efficiency of scientists; rather, it aided
science in gaining profound insights into the physical world.
Today, research in physics is routinely performed using symbolic mathematics, particularly with
visualization via computer graphics. To paraphrase Isaac Newton, in his famous observation
about himself, if we see farther it is because we stand on the shoulders of giants.
Currently the world is suffering from an historic pandemic with an uncertain outcome. A global
effort is underway to find treatments for COVID-19, including tests, therapies and vaccines. At
best, there will be a year or so of suffering before the pandemic is brought under control. At
worst, the virus may be with humanity for decades.
Human biology consists of about a hundred thousand proteins that interact with each other in
various ways. These proteins are encoded in three billion base pairs of DNA that are shared by
all people. The transcription and expression of these genes via RNA is understood. The gene
regulatory networks that control this expression can be “nested” so that the overall biology of the
human being is akin to a computer operating system. It operates on several scales at once.
Many of the details of human biology and the novel coronavirus are quite poorly understood.
Hence science has to start from scratch to understand this pandemic - and probably the next
one.
We are vulnerable because we have not organized the basic biological knowledge of how the
human being works. Science needs a platform that encodes this knowledge of human biology in
a self-consistent and computable way. There presently is no one way to do this, just as there was none in physics before Mathematica. The issue isn’t that a biology platform needs to be the best
possible one; the issue is that some platform needs to exist now.
[Editor’s Note: The Centers for Disease Control and Prevention say there are four main
subgroups of coronaviruses (alpha, beta, gamma, delta) and three other kinds of coronaviruses:
MERS-CoV, SARS-CoV, and SARS-CoV-2.
https://www.cdc.gov/coronavirus/types.html
SARS-CoV-2 is the novel coronavirus that causes the disease COVID-19. The novel
coronavirus SARS-CoV-2 got its name on February 11th, 2020 because of genetic similarities to
SARS-CoV, which caused the SARS outbreak in 2003. SARS stands for severe acute
respiratory syndrome.
COVID-19 symptoms like fever, cough, and shortness of breath are common. Symptoms may
take 2-14 days to appear after exposure (the incubation period), with five days being the
average time to show symptoms after exposure to the virus.
https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html
The World Health Organization declared COVID-19 a pandemic on March 11th, and two days
later a national emergency was declared in the United States. There have been tens of millions
of confirmed cases worldwide and millions of confirmed cases in the United States.
As an approved vaccine has not been widely adopted yet and COVID-19 is spread
person-to-person, precautions should be taken to avoid respiratory droplets from those infected.
COVID-19 has now achieved community spread, which means many are impacted in an area,
potentially without being aware of infection. Before (and after) a vaccine is developed,
precautions should be taken.]
Bob Williams
The following is a tour through the various methods that have been devised and used to
uncover the bits and pieces of insight that make up the present-day scientific understanding of
human cognition and its differences among people. The point of this exercise is to identify tools
and relationships that are not as well known as the ubiquitous IQ test.
Attempts to understand intelligence go back at least to Sir Francis Galton [1822-1911], who
noted the heritability of intelligence, its difference between various populations, and its relation
to physically measurable tasks. Following Galton, Charles Spearman contributed new statistical
methods, insightful test designs, models of intelligence, and, most importantly, his 1904
discovery of g (also referred to as Spearman's g, psychometric g, or the general factor).
Over the course of the next few decades, g languished, while IQ tests were developed, studied,
and refined to a point of high reliability and low bias. Numerous well-known researchers
contributed models, tests, and understanding that were mostly based on the correlations
between test scores and external factors (behavior, physiology, and life outcomes). It was not
until Arthur Jensen began to explain the central nature of g that intelligence research shifted
from earlier models to converge on g theory. Today, it is difficult to find a research paper that is
not about, or constructed from, g theory.
Conventional Tests
Although we are all familiar with some forms of IQ tests, they vary greatly and are designed for a
variety of applications. Testing can be done over an age range from toddler to very old. At the
young end of this range is the test methodology developed by J. Fagan based on selective
attention to novelty (the time toddlers spent looking at new versus familiar faces). His method
was predictive of adult IQ (r = 0.59) and adult educational attainment (r = 0.53). The
Woodcock-Johnson is one of the broad ability tests that measures a specific number of abilities
so that the traditional second-order factors [so-called “group” factors -Ed. Note] of the
Cattell-Horn-Carroll (CHC) model will emerge; it claims to measure from age 2 to over 90. The
Wechsler, various forms, is also a broad-based test, based on the CHC model, and is
considered to be the gold standard (95 percent reliability) by many researchers.
A number of special-purpose IQ-test types have been developed. Some can be given orally to
individuals who cannot write (as in an accident victim). Some are designed for speed of
administration, taking only a few minutes. This latter group of IQ tests sacrifices range and accuracy for speed and is well suited when a coarse sorting is desired. The Wechsler
Abbreviated Scale of Intelligence (WASI) is a well-known example of a test that has been
shortened from its full form to achieve this objective. [The WASI is composed of two very highly
g-loaded subtests (viz., vocabulary and matrix reasoning) as well as the similarities and block
design subtests, rendering administration much speedier. A simple vocabulary test may be one
of the most effective de facto IQ tests one could give in around ten minutes. Remember that
cultural bias is an empirical question, and cultural bias is orthogonal to cultural load. Cf. Bias in
Mental Testing -Ed. Note]
As most people have discovered, they are likely to score differently on different tests. This is
largely due to uniqueness variance. IQ tests give reasonably close agreement of the latent
factor g (when it can be computed), but the tests differ in content designed to produce broad
ability factors and items that are either specific to the test, or due to random error. Specificity
can result from content that is known to the testee (learned material) or is otherwise unique to
the test. When a person is trained to take a category of test (teaching to the test), the specificity
variance increases, thereby causing the g loading of the test to be somewhat lower.
The thing that ties IQ and other ability tests together is known as the positive manifold, which is
the strong tendency of a person to score at a similar level on tests of largely unrelated abilities,
such as vocabulary and block design. Spearman observed this and created the principle known
as the indifference of the indicator, which was intended to point to the universal nature of g as a
general ability that appears in all cognitive abilities. Ergo, any test of cognitive ability is predictive of g, and all such tests are predictive of the same g (meaning that there are not different g factors underlying different tests).
Various tests of working memory capacity require the testee to retain representations, while
performing tasks that make demands on working memory. He may be given a list of words or
letters to remember, separated by a simple task, such as 3 + 5 = 7 (choose yes or no). Then he
is asked to recall the list from memory. People are typically able to retain only a small number of
representations (4 to 9) in working memory. The simple intermediate math operation effectively flushes out some of the working memory that was used to store the list of memory items. While
this category of test is used as a subtest in some IQ tests [Editor’s Note: e.g., Working Memory
Index on WAIS.], it is also used as a stand-alone tool when working memory is being studied.
There are numerous other similar tools that are used for similar purposes.
One of the most interesting special-category tests is the Stroop Color-Word Test. While the test
has three parts, it is the third one that demonstrates the Stroop effect. The testee is shown a list
of typed color names, but each is printed in a different color ink than the color the word names (RED is printed in blue ink, etc.). The testee is asked to name, as quickly as possible, only the
color of the ink in which each word is printed, while ignoring the name indicated by the printed
word.
Here is what happens (from Jensen, 2006, Clocking the Mind: Mental Chronometry and Individual Differences): "Some individuals are so frustrated by the task requirement that they
The purpose of the test is to measure the executive function or attention (ability to avoid
distraction from a task). Research along these lines has linked the executive function, attention,
working memory, and g. The details of their interdependence are not fully resolved, but they
clearly share cognitive resources.
The conventional tests, touched on above, are done with paper and pencil, a computer screen
(acting as paper and pencil), or orally. These tests have been used for a majority of the studies
of human cognitive abilities. They work and they can be altered to suit the specific mental
process that is being studied. Most of them share one significant disadvantage: the tests cannot
be scored on a true ratio scale (as is done with most physical measurements, such as force,
voltage, mass, etc.). Instead, they have to be scored relative to a selected group of people.
In IQ tests, this is the norming group, and the test is scored by determining the z-score relative
to the norming group distribution (IQ = [15 X z score] + 100). The resulting scores are a
reasonable approximation of an equal interval scale (as used in the Fahrenheit and Celsius
scales).
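As a minimal illustration (ours, not from the article), the conversion described above can be written as a one-line function. The norming-group mean and standard deviation are placeholders; the IQ standard deviation defaults to 15, as in the formula above, and can be set to 16 for the Stanford-Binet convention used later in this issue.

def deviation_iq(raw_score, norm_mean, norm_sd, iq_mean=100.0, iq_sd=15.0):
    # z-score of the raw score relative to the norming group,
    # rescaled to the IQ metric: IQ = iq_sd * z + iq_mean.
    z = (raw_score - norm_mean) / norm_sd
    return iq_mean + iq_sd * z

# Example: a raw score one standard deviation above the norming mean
# maps to IQ 115 on the SD-15 scale.
print(deviation_iq(30, norm_mean=25, norm_sd=5))  # -> 115.0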
When physical measurements are used in intelligence research, the results are given on a true
ratio scale, such as time, distance, volume, etc. It turns out that a great many of the things that
can be measured by instrumentation (including clocks) are linked to IQ test scores and g.
Reaction time measurement is usually done with a Jensen Box and consists of a home button (at the bottom center of the device), which the testee holds down, and various target buttons. When the
testee sees the stimulus, such as one of the buttons being illuminated, he releases the home
button and presses the target button.
Reaction time (RT) is measured from the onset of the stimulus to the release of the home button; the time from the release of the home button to the pressing of the target button is the movement time (MT).
Galton performed RT measurements from 1884 to 1893, using a pendulum for the time measurement. His data have been compared to more recent RT studies; the comparison shows that RTs have increased, suggesting a dysgenic effect (explored in detail by M. Woodley).
Another widely used chronometric measurement is based on the shortest time that a person can
recognize a change in the shape of a projected image. The standard image is somewhat like the
letter pi (two vertical lines connected at the top). A cue is given to signal that the test is starting,
then the test image is displayed, with one of the vertical lines shortened, then masked. The
testee is asked which vertical line of the test image was longer. As the display time is reduced, a
point is reached where the testee cannot reliably determine which line was longer. The testee's
inspection time is the point where he can achieve an accuracy of 97.5 percent. Again, there is a negative correlation (r = -0.54) between inspection time and IQ: faster perceptual discrimination goes with higher IQ.
One of the important contributions made using IT was a study by T. Nettelbeck et al. that related to
the Flynn Effect. He performed IT measurements for school children from the same school,
using the same equipment.
The two sets of data were separated by 20 years. He also administered the same IQ test for the
two groups. The expected IQ gain (Flynn Effect) was seen for the test scores, but the IT
measurements were essentially identical, thus strongly suggesting that the test score gains
were hollow with respect to g. I had the opportunity to ask him if there had been any changes in apparent SES, nutrition, or other discernible factors. He said that there were none, and the children were from the same community, school, etc. [Editor’s Note: This finding is fascinating.]
IT tests have traditionally been performed by means of a tachistoscope. It has a shutter and can
project an image for a precise duration. When computer monitors were first tried for this task,
the results were not reliable because of screen characteristics that allowed some people to read
screen artifacts. With modern, very fast computer screens this problem has been solved.
Electroencephalography (EEG)
EEG has been widely used for medical diagnostics for head injuries, tumors, infections, and
other disorders that relate to the nervous system. The measurements detect electrical activity in
the brain by means of electrodes placed on the scalp; these are typically amplified and recorded
on moving paper (creating traces). [Editor’s Note: Both EEG and MEG signals are possible
because of the electromagnetic laws described by Maxwell’s equations, e.g., electrical currents
produce an orthogonal magnetic field.] At one time, a good bit of intelligence research was
carried out using EEG, but the number of papers reporting it has declined as newer
measurement options have appeared.
[Figure: ionic current flowing in dendrites produces an orthogonal magnetic field; the field thus produced is reflected in EEG and MEG readings.]
A primary focus of interest in EEG has been in the traces made following a specific stimulus. Since the traces contain large amounts of noise, they are repeated many times and averaged to produce an averaged evoked potential (AEP).
E. W. P. Schafer reported index methods based on the amplitudes of the AEP to expected versus unexpected stimuli, which relate to neural adaptability and habituation (see The g Factor for details of the
procedures). These methods resulted in correlations as high as r = +0.82 with IQ tests.
Although this methodology did not develop a following by other researchers, it demonstrates
that g is closely related to the electrophysiological activity in the brain.
Intelligence (g) is correlated with numerous other biological parameters that can be measured.
(Cerebral glucose metabolism is one such measure and will be discussed later.) Nerve
conduction velocity (NCV) is inherently related to the speed and efficiency of cognitive activity.
NCV has been measured directly in the brain and in peripheral parts of the body. Peripheral
measurements (for example, finger to wrist, and wrist to elbow) of NCV correlate with g in the
range r = +0.41 to +0.46. Although most of these peripheral studies have produced the
expected result, some have not, and at least one showed opposite results in men (r = +0.63) and
women (r = -0.55).
One of the most well-known of these physical measures is brain volume, which correlates
positively with intelligence. Before brain imaging technology appeared, brain volume had to be
measured by weighing a cadaver brain, or by estimating its volume from the skull volume (taken
as the volume of lead shot or mustard seed that it will hold). Another indirect method of
measurement is to take the head circumference or multiple measures of length and width to
estimate the volume. While head measurements correlate at only r = +0.20 with g, the
correlation is robust and has been repeated many times with large studies. One of the
unexpectedly interesting papers that I have heard presented was Ian Deary's calculation of the
Brain Imaging
Brain imaging technology is to the study of intelligence as the Hubble telescope has been to
cosmology. Imaging has appeared in several stages, and each has opened new paths of study and brought huge gains in the understanding of intelligence.
Positron emission tomography (PET) can be used to create images of the brain and various other organs. The thing that is seen as an image is the accumulation of a radioactive tracer (oxygen-15, fluorine-18, carbon-11, or
nitrogen-13) as the tracer is concentrated by the action of the organ being studied. As the tracer
decays, it emits a positron, which collides with a nearby electron and causes the emission of
two photons. The photons are detected externally.
In the case of brain imaging, the image is effectively an integral of glucose uptake rate. The
tracer used is fluorodeoxyglucose, which gives a time resolution of about 32 minutes. Thus, the
image produced when a person is asked to perform a cognitive task is an integral over a time
span of 32 minutes. The first use of PET to study intelligence was done by Richard Haier
(presently editor of the journal Intelligence) in 1987. At that time, the cost of a single scan was
$2,500. Haier financed the initial work by agreeing to do medical scans in trade for some
research scans. His first subjects were given the RAPM (Raven's Advanced Progressive
Matrices) during the exam. Raw test scores ranged from 11 to 33 (out of a possible 36).
Haier also looked at the effect of learning, using the game Tetris. [Editor’s Note: Mega Society
qualifier and mathematician Solomon W. Golomb’s game of pentomino directly inspired Tetris.]
Several subjects were given practice sessions with the game (new at that time). They had not
seen the game before and were restricted to uniform practice sessions. They improved their
play score by a factor of 7. PET scans before and after the learning sessions showed significant
reductions in brain activity in some parts of the brain. Haier wrote: "We concluded that with
practice and improved performance, subjects learn what areas of the brain not to use, and this
results in GMR (glucose metabolic rate) decreases."
PET studies showed the value of being able to measure actual brain activity while subjects were
performing mental tasks. The technology was expensive and had the slow 32-minute temporal
resolution, so it was displaced when faster, MRI-based machines arrived.
The first MRI was performed on a human in 1977. The machines are based on the use of a very
strong magnetic field (5,000 to 20,000 gauss; the earth's magnetic field measures 0.5 gauss)
that is achieved by means of a superconducting magnet. A few years ago, R. Haier told me that
there was an MRI machine that used a magnetic field that was significantly higher (ten times, as
I recall) than other machines. He said some people complained of headaches and that the brain
was warmed - probably causing the headaches. (A recent literature search shows that possibly
even more powerful, new MRI scanners have been built. The reason for increasing the
magnetic field strength is that it enables the voxel size to be reduced from 1 mm to 0.1 mm.)
The strong field aligns hydrogen nuclei (protons) in the body's tissues. A radio frequency wave is then added and pulsed on and off, causing the nuclei to snap out of alignment and then back in. This shifting of nuclei alignment causes a weak energy release
(also a radio frequency wave), which can be detected by the MRI machine (via receiver coils
that act as aerials) and used to create an MR image.
This basic technology (the same as many have experienced in a medical setting) can be varied
to allow various specialized forms of imaging. The most basic application for intelligence
research is structural MRI, or sMRI. This is essentially a snapshot of the brain, but the image is
3D. It can be rotated and viewed from any angle and can produce a "slice" image of the brain at
any depth. Since the image is in 3D, the points are also 3D, unlike the 2D pixels of a digital
photograph. The 3D representations are known as voxels.
One of the problems encountered in understanding a brain image is that brains are not identical
in size and shape. Yes, they are all generally the same in appearance, just as our faces are
similar yet different enough that we can recognize them. A researcher must be able to compare
brains, despite their differences. This can be accomplished by a computer using a process
known as voxel-based morphometry. The process morphs the MRI data to fit a standard form
and smooths the results so that they can be analyzed. For example, an area of great interest is cortical thickness (CT). In order to study it and to compare different brains, the cortex representation
has to be smoothed so that the folds are removed and the resulting artificial image retains the
dimensions that are of interest, while losing the irregularities that would otherwise make it
unmanageable.
[Figure: axial view (left) and sagittal view (right) from a structural MRI scan.]
One finding is that cortical thickness increases in early childhood, then begins a slow decrease
around ages 7 to 10 years. When plotted against time, the trajectories of bright children (from
longitudinal NIH data) show greater thickness at every age than those of less bright children. During the first phase, thickness increases more rapidly in bright children, but it thins at a similar rate following the peak. This has obvious significance for the verification of the high heritability of intelligence; the trajectories are set from early childhood. The strongest
correlations between CT and IQ are found in the age range of 8 to 12 years.
The figure (below) of CT for different intelligence groups shows that there are differences and
that they vary as a function of age. The illustrations of CT as a function of intelligence at the
bottom of the figure also show how a brain appears after computer smoothing.
When the thicknesses of specific locations are correlated against IQ, the results are different for
men and women (a surprise to Haier and his team). The highest correlations (gray matter
regions) in men were found in posterior regions, especially those related to visual-spatial
processing. In women, the IQ-to-thickness correlation was almost entirely limited to the frontal
lobes, especially in the language area (Broca's Area). Findings that show sex differences have
been frequent, and each strongly suggests the need to keep male and female data separate.
Haier made this point to the International Society for Intelligence Research (ISIR) conference in
2006.
Functional MRI (fMRI) creates images based on molecules containing iron, which is highly sensitive to the intense magnetic fields of MRI machines. Hemoglobin in red blood cells contains iron, thus connecting the fMRI images to blood flow in the brain. When a brain region is cognitively active, it will have greater blood flow, and this will be seen by the fMRI scan. The fMRI process is fast, with images captured in rapid succession and a net temporal resolution of about 1 second.
One of the applications for fMRI is the study of functional connectivity. When static
measurements are made, the information conveyed relates to the function of a given brain
region (functional segregation). But as imaging research progressed, brain regions were found
to work together, such that a single region is necessarily involved in multiple functions. With
fMRI, it is possible to see the connected activities of brain regions.
Using fMRI, it is possible to observe the brain performing a task over a period of time. Various
regions show activity (increased blood flow) sequentially, as the brain deals with the task. In a
conversation with R. Haier, he mentioned to me that fMRI data were proving to be difficult to use
because of the large differences seen between individuals. This is not a problem with static
imaging techniques, such as sMRI and diffusion tensor imaging.
Diffusion tensor imaging (DTI) is a different form of structural MRI. It is optimized to image the water content of white
matter. The first study did not happen until 2005. Prior to then, white matter was relatively
difficult to study. It was possible to measure white matter volumes and to do correlations with
that and intelligence (revealing a large sex difference), but the details of how white matter tracts
were organized were hidden. DTI has opened a new field of research - brain connectivity (wiring).
Among the things that have been found are that the tracts form bands (in some places) that are
When water movement is detected by the MRI process, it can be quantified as to the degree to
which the molecules move in the same direction. This parameter is known as fractional
anisotropy (FA) and is higher when the movement vectors are directionally similar. If FA is low, it
indicates that the water movement is more diffuse, and this is taken to be an indication of low
tissue integrity. Higher FA is a positive correlate of intelligence for both white and gray matter.
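For readers who want the arithmetic behind FA, the standard diffusion-MRI formula (our addition, not taken from the article) computes it from the three eigenvalues of the diffusion tensor at a voxel; values near 0 indicate diffuse, isotropic movement, and values near 1 indicate strongly directional movement.

import math

def fractional_anisotropy(l1, l2, l3):
    # l1, l2, l3: eigenvalues of the diffusion tensor at one voxel.
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(0.5 * num / den)

# Strongly directional diffusion (e.g., along a white-matter tract) gives high FA:
print(round(fractional_anisotropy(1.8, 0.3, 0.3), 2))    # ~0.81
# Nearly isotropic diffusion gives FA close to zero:
print(round(fractional_anisotropy(0.8, 0.75, 0.78), 2))  # ~0.03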
Magnetoencephalography (MEG)
Unlike other methods of brain imaging, MEG is completely passive and is a direct observation of
the brain, while other techniques are measuring secondary phenomena (isotope decay, water
movement, etc.). MEG is thus totally safe and noninvasive.
When MEG recordings are compiled into a movie, brain activity can be seen as a function of time. This was demonstrated (by Thoma) at the 2005 ISIR conference, showing the brain reacting to a simple stimulus.
MEG remains a new tool with a limited history for intelligence researchers. It has great
promise and is being evaluated by researchers. An example of an MEG movie, made while the
subject is solving a test item from the paper-folding task, can be found here:
www.cambridge.org/us/academic/subjects/psychology/cognition/neuroscience-intelligence
(select: student resources, then animations, then animation_4.3.mp4).
Genetics
Although Galton observed that intelligence was a family trait, the role of genetics in determining
intelligence was not understood for many decades. In the 1960s, even scientists believed that
intelligence was largely a product of the environment (books in the home, encouragement to
excel in academics, etc.). When Arthur Jensen entered the field, that is exactly what he
expected to find, but when he looked at real data, he saw a different story. The result was his
80-page landmark paper: "How Much Can We Boost IQ and Scholastic Achievement?" by
Arthur R. Jensen, University of California, Berkeley, Harvard Educational Review, Vol. 39, No. 1,
Winter 1969, pages 1-123.
From that point on, Jensen published a huge number of papers and books that addressed the
issues related to demonstrating that intelligence is primarily the product of genes, with little
environmental variance. Of the environmental variance that is found, it can be divided into the
shared and the nonshared environmental factors. The former is that part of the environment that
makes us more similar (family), and the latter is that part that makes us more different. There is
a shared environmental variance in early childhood, but it vanishes by about age 12, leaving only the experiences people have as individuals, such as injury, disease, exposure to toxins, etc. (factors that lower intelligence). From early childhood on, the heritability of
intelligence increases (the Wilson Effect) into adulthood. By adulthood, the heritability of IQ is
85% and the heritability of g is 91%.
Although repeated studies have shown this high heritability of intelligence, attempts to find a
single intelligence gene (or a few genes) have failed, despite methodologies that would have
found it without doubt. This research has been led by Robert Plomin, who has authored
numerous papers on the topic of the genetics of intelligence.
What is going on? The simple answer is that intelligence genes have been found, and each has
accounted for only a percent or less of the total variance. As has been the case for other traits,
intelligence is the product of hundreds or thousands of variants. For example, height has been
shown to be determined by more than 900 variants. The two concepts that relate to this are
pleiotropy (one gene affecting multiple traits) and polygenicity (many genes affecting one trait).
Further Reading
For those who are interested in reading original intelligence research papers, there is only one
print journal dedicated to this subject: Intelligence. It is the official journal of ISIR and is the
source of some of the best research papers. Another source that frequently contains top-quality
work is Personality and Individual Differences. In the area of brain imaging, there are worthwhile
papers in Neuroimage, Neuroscience, and Cortex.
Haier, Richard J., (2017), The Neuroscience of Intelligence, New York: Cambridge University
Press. This book is recent and was skillfully written to be easily readable, yet complete with
respect to present-day understandings.
Haier, R.J., (2013), The Intelligent Brain, The Great Courses, Chantilly, Virginia (3 DVDs).
The first DVD is a review of non-imaging research. It then gets into the very interesting work that
Haier and his colleagues have done.
Jensen, A. R., (1998), The g Factor: The Science of Mental Ability, Westport, CT: Praeger.
Written by the most outstanding intelligence researcher of the second half of the 20th century,
this book was, and presumably still is, the all-time most cited book in this field.
For those who want excellent and accurate information that is written for public consumption
(some exceptions), I strongly recommend the articles and papers by Linda Gottfredson. She has
generously made virtually everything she has written available on her web page:
http://www1.udel.edu/educ/gottfredson/reprints
David Redvaldsen
[The full open-source article, including all tables, references and appendices, is available
through this link: https://www.mdpi.com/2624-8611/2/2/10/htm]
Abstract: The Mega and Titan Tests were designed by Ronald K. Hoeflin to make fine
distinctions in the intellectual stratosphere. The Mega Test purported to measure
above-average adult IQ up to and including scores with a rarity of one-in-a-million of the general
population. The Titan Test was billed as being even more difficult than the Mega Test. In this
article, these claims are subjected to scrutiny. Both tests are renormed using the normal curve
of distribution. It is found that the Mega Test has a higher ceiling and a lower floor than the Titan
Test [Editor’s Note: Grady Towers found the same: “The Titan has much less floor than the
Mega Test, but almost as much top.” -Grady Towers]. While the Mega Test may thus seem
preferable as a psychometric instrument, it is somewhat marred by a number of easy items in its
verbal section. Although official scores reported to test-takers are too high, it is likely that the
Mega Test does stretch to the one-in-a-million level. The Titan Test does not. Testees who had
previously taken standard intelligence tests achieved average scores of 135–145 IQ on those.
Since the mean of all scores on the Mega and Titan Tests was found to be IQ 137 and IQ 138,
respectively, testees had considerable scope to find their true level without ceiling effects. Both
are unusual and non-standard tests which require a great deal of effort to complete.
Nevertheless, they deserve consideration as they represent an inventive experimental method
of measuring the very highest levels of human intelligence and have been taken by enough
subjects to allow norming.
[Editor’s Note: Grady Towers’s “Some Observations on the Titan Intelligence Test,” including g
loadings, can be found here: http://miyaguchi.4sigma.org/hoeflin/titan/gradynorm.html
Key Takeaways: “The general factor accounts for slightly more than 76 percent of the total test
variance. Loadings on g rarely exceed 0.8 on even the best tests, so these numbers are
unusually good.”
“The test as a whole was found to have a Kuder-Richardson formula twenty reliability of 0.952,
which is excellent and compares favorably with the very best mental ability tests in existence.
The odd-even split-half reliability of 0.965 was also found.” -Grady Towers]
Intelligence tests were invented by Alfred Binet and his student Théodore Simon in 1905 with
the purpose of identifying pupils in need of remedial help in French public education. Within a
few years, they had been translated into English and were to reach their apogee in the United
States where Lewis M. Terman, a young professor of education at Stanford University, made his
reputation as the foremost authority on all matters connected with intelligence. Terman’s first
book on the topic, The Measurement of Intelligence, featured examples of individuals within the
various classifications. By the time his The Intelligence of School Children was published three
years later, it was clear that Terman’s primary interest was in subjects scoring at the highest
levels. He had already begun a study of exceptional children, which became the basis for
longitudinal research into the lives and careers of the gifted.
This study, published in five volumes as Genetic Studies of Genius between 1926 and 1959,
required the construction of a special instrument to accommodate Terman’s subjects as adults,
called the Concept Mastery Test. This marked the beginning of experimental research on adults
in the intellectual stratosphere using psychological techniques. Due to the rarity of the
individuals concerned, it was fraught with practical difficulties. One possible method was to give
adolescents achievement tests designed for adults. That was the approach chosen by the Study
of Mathematically Precocious Youth, which began in 1971 at Johns Hopkins University and
which, despite its name, also considered verbal ability. Students who scored at the highest
levels on college admission tests at the age of 13 must, logically, be even brighter than the most
able ordinary freshmen.
In contrast to this well-funded academic project, extending the scale of intelligence to the
highest conceivable levels was an endeavor solely taken up by amateurs. Their method was to
publish self-authored tests and to form societies for those who received the highest scores on
them. In this way, more could be learned about intellects of the very highest order. Omni
magazine, devoted to popular science and science fiction, published three such tests between
1979 and 1990. Because of Omni's large readership, enough responses were received to allow
official scoring of these tests with at least a semblance of being exact. The procedure of the
designers was to compare the number of correct answers yielded by participants and their
self-reported previous performance on standard educational or intellectual scales. The data
were submitted by mail. It was, of course, an experimental method, because there could be no
supervision of test-takers or control of whether the reported scores on the standard tests were
accurate.
These three tests were the Langdon Adult Intelligence Test, the Mega Test and the Titan Test.
They are the only credible tools for the measurement of intelligence at levels above the ceilings
of the traditional instruments - the Stanford–Binet, first developed by Terman, and the Wechsler
Adult Intelligence Scale (WAIS). The Concept Mastery Test is purely verbal or educational,
which means it cannot capture numerical or logical thinking, seen as essential components of intelligence.
2. Object
In this paper, we will investigate the Mega and Titan Tests, designed by Ronald K. Hoeflin and
published in Omni magazine in April 1985 and April 1990, respectively. We wish to discover
whether their author’s claims for them are well-founded. The Mega Test was billed as
discriminating up to the one-in-a-million level of the general population as for intelligence, while
the Titan Test was designed to be even more difficult. If this is verified, they could potentially
help to identify the most gifted adults imaginable. This is a topic of some interest as the study of
genius is one of the oldest concerns within psychology. As the Mega and Titan Tests are
relatively unknown tools serving a niche market, we additionally wish to consider whether they
are suitable for wider use by psychologists. The Langdon Adult Intelligence Test would also
have been considered if its norming data had been made public. It is believed to have been
taken by more than 20,000 individuals and was normed on the basis of recognized intelligence
tests.
3. Method
The designer’s method is the experimental measurement of the very highest levels of human
intelligence. It is experimental in the sense of being based on unrecognized techniques which
are put forward for consideration. It is also experimental in retaining some features of previous
practice, while changing others in pursuit of a particular outcome. Dr. Hoeflin saw intelligence as
a composite of verbal, numerical and spatial skills. Standard item formats such as analogies,
number series, logical progression and mental manipulation of three-dimensional objects were
included. However, he dispensed with a time limit and permitted the use of reference materials
and, in one case, pocket calculators. These novelties may be justified by seeking to privilege
intellectual power over speed and the correct application of knowledge rather than merely
possessing it. Our method, on the other hand, goes back to first principles in simply mapping
the raw scores obtained on the tests onto the normal curve of distribution. An assumption
behind the norming of tests of mental ability is that intelligence is a variable characteristic which
is distributed normally [Editor’s Note: Via a bell, or Gaussian, curve]. This also matches the
empirical realities, according to a meta-study of ten mostly well-known surveys of intellect. We
are not primarily interested in the predictive validity of the tests since they are designed for
adults, but whether they can be used to identify the presence of intellectual power beyond what
the standard tests allow. If they do, the entire range of human intellect would be available for
study according to a common criterion (psychologically tested intelligence). A critique of the
Mega Test already exists which focused on the violation of psychological practices inherent in
accepting testing without supervision and norming from self-reported data. The reviewer felt that
its accuracy would be increased if the test were taken under controlled conditions. As it stands,
the library resources available to test-takers are a factor in the score generated. Even so, Dr.
Carlson recognized that the author made such a choice for practical reasons. Hoeflin did not
have access to a large pool of individuals who could take the test under controlled conditions for norming purposes.
4. Limitations
As we are probing an experimental attempt to extend the range of the scale of intelligence, we
are aware that our research has several limitations. Chief among them is the problem of validity
intrinsic to the Mega and Titan Tests. No data have been published which show Titan Test
correlations with other intelligence tests, while the Mega Test correlates only 0.374 with the
Stanford–Binet and a mere 0.137 with the WAIS. It is true that it correlates more highly with
some other intelligence tests (0.565 with the Army General Classification Test and 0.562 with
Cattell), but can we be sure that these tests measure what they purport to do? The lack of a
time limit has the effect of rewarding persistence and intense interest in the subject matter over
actual capacity in a real-world setting. In removing speed as a factor, the Mega and Titan Tests
also define intelligence differently to the established understanding manifest in virtually all other
tests, while simultaneously giving rise to scores on what is presented as the same scale. Most
of the questions on these tests are very difficult and risk conflating puzzle-solving skills with
general ability [Editor’s Note: The kiss of death? What is a challenging question on a traditional
IQ test’s matrix reasoning section if not "puzzle-solving skills"?]. The Titan Test, especially, contains too many spatial items to be representative of g, the general factor underlying thinking.
Neither it nor the Mega Test can be used on the general population, and consequently the
lowest raw scores are uncertain. The tests are, however, reliable, as they would be scored
identically by any marker since there is a single correct answer to each question. A present
limitation for us is that we are relying on a non-standard source (a web page) for scores on the
Titan Test, as Omni magazine did not continue coverage of the topic after 1990. We have no
direct method of norming the Mega Test other than by the self-reported previous test scores of
Omni participants (also from the web page) and the Titan Test, in turn, is normed from
self-reported scores on the Mega Test. Although the tests are examined here in case they are,
or may be adapted to be, useful to psychologists and researchers, there is no guarantee that
the highest scorers on them are representative of the statistical group to which they belong.
They are a self-selected sample, possibly with excess time on their hands.
The Mega Test consists of 48 items, of which 24 are verbal analogies, 12 spatial problems, 6
number series and 6 other numerical problems. Two of the questions are multiple choice, but
there is no penalty for wrong answers on these or other questions. There is no time limit, though
it is suggested the subject spend no more than one month. Reference materials and pocket
calculators are permitted. Given that thesauri can be used, a number of questions in the verbal
section become relatively easy. The January 1986 issue of Omni carried a score report for the Mega Test.
The variance can accordingly be calculated as 221,306/3,258 = 67.93. The standard deviation is
therefore √ 67.93 or 8.24.
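As a rough illustration (ours, not part of the original article), the mean and standard deviation can be recomputed from a published score distribution; score_counts below is a hypothetical mapping from each raw score to the number of Omni testees who obtained it.

import math

def mean_and_sd(score_counts):
    # score_counts: {raw score: number of testees who obtained it}
    n = sum(score_counts.values())
    mean = sum(s * c for s, c in score_counts.items()) / n
    # Sum of squared deviations divided by n, mirroring the article's
    # 221,306 / 3,258 = 67.93 for the Mega Test (SD = sqrt(67.93) = 8.24).
    variance = sum(c * (s - mean) ** 2 for s, c in score_counts.items()) / n
    return mean, math.sqrt(variance)

# Toy usage with made-up counts:
print(mean_and_sd({13: 400, 15: 500, 17: 350}))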
As established, we have a mean of 15 and a standard deviation of 8.24. It was also reported in
the January 1986 issue of Omni that the mean IQ for its readers on the Mega Test had been
141 (on the scale used by the Stanford–Binet, which traditionally had a standard deviation of
16). Dr. Hoeflin arrived at this value by collating previous scores on intelligence and
achievement tests reported by participants. We have chosen to calculate the mean IQ on the
basis of four intelligence tests alone: the Cattell, the California Test of Mental Maturity (CTMM),
the WAIS and the Stanford–Binet. Scores were reported on other tests too, but since the
standard deviations for those are not as clear as for our chosen instruments, they were not
taken into consideration by us. This is particularly true for the Army General Classification Test.
The major advantage in introducing the deviation IQ [as opposed to ratio IQ calculated via
mental age/chronological age multiplied by 100 -Ed. Note] was that it should conform to the
normal curve of distribution. The numbers of testees at each score above the mean do show a generally declining tendency. We therefore decided to map raw scores above the mean onto the normal curve. The shape of the normal curve means that scores taper off sharply above 140 IQ. Under the normal curve, scores further above or below the mean are less common, but on this test, raw scores below the mean increase in frequency. Therefore, a different method will be used to calculate raw scores below the mean. We divided scores at and above the 137 IQ-level into intervals of 3 IQ points. To substantiate just how rare the very highest scores are supposed to be, the relative frequencies of the relevant IQ levels are given in Table 4.
Table 4. Relative frequency of particular IQ levels according to the normal curve of distribution
There were 1,566 testees who scored 15 or higher on the Mega Test among the Omni
readership. We place these scores into the various intervals of the grid constructed from the
normal curve of distribution.
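A minimal sketch of that grid construction follows (our illustration, not the article's own code). It assumes a population mean of 100 and a standard deviation of 16 (the Stanford-Binet scale used here), and spreads the testees at or above the threshold across 3-point IQ intervals in proportion to the area under the normal curve.

import math

def normal_cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_counts(n_testees, floor_iq, ceiling_iq, width=3, mean=100.0, sd=16.0):
    # Probability mass of the normal curve at or above the floor IQ.
    p_above = 1.0 - normal_cdf((floor_iq - mean) / sd)
    grid = {}
    lo = floor_iq
    while lo <= ceiling_iq:
        p = normal_cdf((lo + width - mean) / sd) - normal_cdf((lo - mean) / sd)
        grid[(lo, lo + width - 1)] = n_testees * p / p_above
        lo += width
    return grid

# The 1,566 Mega testees at or above IQ 137, spread over intervals 137-139, 140-142, ...
for interval, count in expected_counts(1566, 137, 170).items():
    print(interval, round(count, 2))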
The highest scorer on the Mega Test among the Omni readership solved 45 correctly. This
represents an IQ of 170 or slightly above. There were three subjects who solved 44 correctly
and their associated IQs would be 165–170. We decided to assign them to the 167–169
category, as this allows good approximations of 43 and 42 as raw scores in the 164–166 and
161–163 categories, respectively. No reader scored above 45 and we simply do not have the
data to tell us what IQ levels these raw scores represent. Because we originally believed the
Titan Test to be harder for all raw scores, our aim was to extrapolate from that test to Mega raw
scores above 45. It will be shown below, however, that the Mega Test is more difficult near the
ceiling.
On the basis of the present norming, we believe that the Mega Test is indeed able to yield IQs
at the one-in-a-million level, a threshold which is attained at a raw score of either 46, 47 or 48.
[Editor’s Note: Interestingly, Kevin Langdon reckoned that the one-per-million level should be
pegged at 47/48 correct on the Mega Test - Kevin generously allowed one point for “ceiling
bumping,” rendering the proposed Mega cutoff score 46/48 - and said 43/48 on the Mega Test
corresponded to an IQ of 172, sigma = 16. https://megasociety.org/noesis/140/admstds1.html]
As for the scores below the mean, we calculate them from one standard deviation equaling 16 IQ points. In this way, each question solved correctly up to the mean adds 16/8.24 or 1.94 IQ points to one's score. This gives us the values shown in Table 6.
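A small sketch of this below-the-mean conversion (ours, not the article's; the anchoring of the mean raw score of 15 at IQ 137 follows the renorming described above):

def mega_iq_below_mean(raw, mean_raw=15.0, anchor_iq=137.0, sd_raw=8.24, iq_sd=16.0):
    # Each raw point below the mean subtracts iq_sd / sd_raw = 1.94 IQ points.
    return anchor_iq - (mean_raw - raw) * (iq_sd / sd_raw)

# Example: a raw score of 10 on the Mega Test comes out at roughly IQ 127.
print(round(mega_iq_below_mean(10)))  # -> 127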
The Titan Test consists of 48 items, of which 24 are verbal analogies, 6 are number series, 17
are spatial problems and 1 is a complicated calculation. It was designed to be more difficult than
the Mega Test. Hardly any questions are intuitive and almost all require a substantial amount of
effort. There is no multiple choice nor penalty for incorrect answers. Test-taking time is unlimited and could require more than a month; reference materials are allowed, but not calculators or computers.
The Titan Test was attempted by 391 Omni readers. This was only a fraction of the number of
responses received for the Mega Test, but is nevertheless high for a test of this nature. The
scores of the Omni participants were reported to Mr. Miyaguchi and appear on his website.
We begin by calculating the mean, which is 4,556/391 = 11.65. Once we also have the standard
deviation for the test, we are in a position to begin norming it.
The sum of the squared deviations from the mean is 42,455, so the variance is 42,455/391 = 108.58 and the standard deviation is √108.58 = 10.42.
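[Editor's Note: a minimal sketch of these summary statistics, assuming the 391 Titan raw scores are available as a Python list.

    import math

    def summary_stats(raw_scores):
        n = len(raw_scores)
        mean = sum(raw_scores) / n                       # 4,556 / 391 ≈ 11.65
        ss = sum((x - mean) ** 2 for x in raw_scores)    # sum of squared deviations, ≈ 42,455
        variance = ss / n                                # ≈ 108.58
        sd = math.sqrt(variance)                         # ≈ 10.42
        return mean, sd
-Ed. Note]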
What IQ level does the mean of 11.65 represent? Unlike for the Mega Test, previous test scores
of Omni participants are not available. The only usable data we have for norming is a table of
paired scores for testees who attempted both the Mega and the Titan Tests from early 1999,
several years after the latter was published in Omni.
This indicates that the mean score on the Titan Test was lower than on the Mega Test for
participants who took both. Of the 83 participants [who took both tests -Ed. Note], 52 achieved a
lower raw score on the Titan Test than on the Mega Test. Their mean score was 22.5 on the
Titan Test versus 24.6 on the Mega Test. This is considerably higher than the mean of 11.65 on
the Titan Test for all Omni participants and not particularly helpful for determining the IQ at the
mean. We decided to find the Mega Test scores of Titan Test-takers who were close to the
mean of 11.65 by considering scores at 10 to 13 on the latter. Their Mega Test scores were 18,
25, 23, 19, 15, 14, 11 and 15 compared to their Titan Test scores of 13, 12, 12, 12, 11, 11, 11
and 10, respectively. Their mean Mega Test score was therefore 17.5 and their mean on the
Titan Test was 11.5. Now 17.5 on the Mega Test is equivalent to an IQ of 138.5, so if we
estimate a raw score of 11 on the Titan Test as being equivalent to an IQ of 138, we have a
base for our norming. It is also the same value as Hoeflin calculated in the official norming and
he had access to the test-takers’ previous IQ scores.
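[Editor's Note: the anchoring step above, as a short Python sketch using the eight paired scores quoted in the text.

    mega_scores  = [18, 25, 23, 19, 15, 14, 11, 15]     # Mega raw scores of the eight testees
    titan_scores = [13, 12, 12, 12, 11, 11, 11, 10]     # the same testees' Titan raw scores

    mega_mean  = sum(mega_scores) / len(mega_scores)     # 17.5
    titan_mean = sum(titan_scores) / len(titan_scores)   # 11.5
    # 17.5 on the Mega Test corresponds to IQ 138.5 under the present norming,
    # so a Titan raw score of 11 is anchored at approximately IQ 138.
-Ed. Note]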
Table 10. Relative frequency of particular IQ levels according to the normal curve of distribution
There were 167 test-takers who scored 11 or above and thus qualified to be inserted into our
grid. We sort the test-takers into categories on the basis of their raw scores, as can be seen in
Table 11.
Table 11. Distribution of testees in the intervals and associated scores on the Titan Test
IQ interval    Testees (per the normal curve)    Associated raw score
153–155        5                                 39
156–158        2.25                              40
162–164        0.5                               44
168–170        0.125                             48
171–173        0.063
174–176        0.023
177–179        0.0075
180–182        0.0038
183–185        0.0015
TOTAL          166.9638
The clustering at the bottom is probably a result of there not being enough test-takers to get
more precise results. Getting seven more questions right should raise IQ by much more than
two points. It is a problem that we have fewer than 400 test-takers, as the mean IQ is based on
the Mega Test to which there were more than 3,200 responses. The Titan Test appears to be
able to discriminate up to the one-in-a-hundred-thousand level, but more extravagant claims do
not seem well-founded. As for the scores which were below the mean, we use the mean and the
standard deviation to estimate them, in preference to the normal curve to which this array of
scores does not conform. As before, we assign 16 IQ points to one standard deviation of 10.42,
counting from IQ 138, which we placed at raw score 11. Each question answered correctly up to
the mean therefore yields 16/10.42 IQ points, or an increment of 1.54 IQ. Hence, we obtain the norming for the test given in Table 12.
This norming is surprisingly low in context, as virtually all the questions on the Titan Test are
difficult, unlike the Mega Test which also includes relatively easy items. The mean of the Mega
Test is 15 and the standard deviation is 8.24. For the Titan Test, we found a mean of 11.65 and
a standard deviation of 10.42. Each test has 48 items of equal weighting. Let us ascertain
theoretically which test is the more difficult at the highest raw score possible.
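[Editor's Note: one straightforward way to make this comparison - not necessarily the report's own derivation - is to express the maximum raw score as a standard score on each test.

    mega_z  = (48 - 15.0)  / 8.24    # about 4.0 SDs above the Mega mean
    titan_z = (48 - 11.65) / 10.42   # about 3.5 SDs above the Titan mean
    # A perfect score thus lies further above the mean, in standard-deviation units,
    # on the Mega Test, consistent with the empirical evidence discussed below.
-Ed. Note]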
Empirical evidence also lends credence to the result that the Mega Test is harder at the highest
levels. Out of the 3,258 test-takers, there were 21 who scored 40 or higher on the Mega Test,
which equals a proportion of 0.64%. For the 391 Titan Test subjects, there were 5 who scored
40 or higher, a proportion of 1.28%. For the paired scores, we notice that the highest scorers on
the Titan Test normally achieved a lower raw score on the Mega Test. A perfect score of 48 on
the Titan Test, achieved by one subject, equals about 44 on the Mega Test. The second highest
score on the Titan Test, when considering only Omni participants, was 44, which equals about
42 on the Mega Test.
7. Conclusions
The renorming of these tests has indicated that the official scores reported to participants are
too generous in almost all instances. According to our results, the designer’s most recent
norming of the Mega Test [Editor’s Note: Sixth Norming of Mega Test:
http://miyaguchi.4sigma.org/hoeflin/meganorm.html] is too high by six IQ points at a raw score of
10, five IQ points at a raw score of 20, ten IQ points at a raw score of 30 and eleven IQ points at
a raw score of 40. The Titan Test has only been normed once. That norming, we believe, is too
high by three IQ points at a raw score of 15, by five IQ points at a raw score of 20, ten IQ points
at a raw score of 30 and thirteen IQ points at a raw score of 40 [Editor’s Note: Titan Test norms:
http://miyaguchi.4sigma.org/hoeflin/titan/titanorm.html]. Scores on the Mega Test are boosted
because its verbal section contains a number of items which can be solved without much effort.
The verbal section of the Titan Test is more abstruse, requiring greater knowledge, more
elaborate fact-finding and more thought as to what is being asked for. Therefore, our norming of the Titan Test is almost identical to Dr. Hoeflin’s up to a raw score of 11.
It is a surprise that the ceiling of the Mega Test seems to be higher than that of the Titan Test.
Even a cursory glance at the two tests gives the impression that the Titan Test is harder, and it is over most of the range; only near the ceiling does the Mega Test prove more difficult.
The decisive issue is whether these tests can be useful to psychologists. Our norming does
indicate that the tests go above the ceilings of established tests [The WAIS-III, e.g., has a
ceiling of IQ 155, sigma 15, and WAIS-IV has a ceiling of IQ 160, sigma 15. -Ed. Note]. Subjects
who achieve a raw score above 40 are of such exceptional ability that standard tests are unable
to measure them adequately. Scores above 5 or so on the Titan Test and scores above 11 or so
on the Mega Test also betoken giftedness in the subject. For detecting this, the experimental
tests are alternatives to the many accepted tests which operate with a ceiling of only 2 to 2.5
standard deviations above the mean. If the experimental tests were to be adopted by a
researcher with the resources necessary to combine them in such a way that the easy verbal
and any other faulty items were eliminated, they might serve as a useful complement to other
high-range instruments such as the Concept Mastery Test or the Miller Analogies Test. This is
especially true because the experimental tests offer many non-verbal questions. New norms
would of course have to be established for the improved test or tests. Short forms of the tests
could also be created which select the best items. [The Prometheus Society, a high-IQ society
catering to those scoring at least four sigma above the mean, currently accepts a scaled score
of 500 on the Miller Analogies Test and once accepted a raw score of 21/27 on the Mega27, a
shortened version of the Mega Test. -Ed. Note] Item Response Theory would be useful here.
The object would be to choose the items which act as the greatest indicators for the levels of
ability which surpass the norms on the standard tests available. The Mega and Titan Tests,
however, cannot be used on their own and in their current form by psychologists, owing to the
lack of supervision associated with them and the extremely lengthy test procedure.
Ron Hoeflin
I did not read the entire report by David Redvaldsen but I did read his norms for Table 12 as
well as his entire "Conclusion." I am not a statistician, so I would recommend someone like Fred
Britton, who took many courses in statistics at the University of Illinois (Urbana-Champaign) at
the graduate level [Fred Britton was taught by Raymond Cattell, one of the three namesakes of Cattell-Horn-Carroll (CHC) theory in psychometrics -Ed. Note] to examine the statistical claims
of this report. It does puzzle me, however, that eliminating the easier problems would increase
the ceiling of the test. That seems counterintuitive. Maybe the author is assuming a normal
curve, which is known not to apply with much success to high-ceiling tests. I think that including
easier problems is essential in order to entice a wider audience to try such tests. My Titan Test
started with perhaps unduly difficult verbal problems that would not entice many people to try
the test, e.g., the problem "Strip : Mobius :: Bottle : ?" As of now only one person, after 30
years, has attained a perfect score on the Titan Test, and only a handful have attained a score
of 43 right out of 48, which is used as the cut-off for the Mega Society. The Mega Test was
similar until there was a spate of very high scores, which may have been due to leakage of
answers on the Internet. But Redvaldsen's results presumably are unaffected by this spate of
very high scores because they occurred long after the initial batch of 3,200 Omni readers had
tried the test. It was about 20 years before anyone achieved a perfect score on the Mega Test
(not counting two pre-Omni perfect scores, which were partially due to cheating by at least one
of the two, and these two early perfect scores would presumably not have been included in
Redvaldsen's study). Tests like the SAT and GRE are administered to such a large audience
that new problems can be tried experimentally with hundreds of thousands, perhaps millions of
people, before they are counted in scoring these tests. Unfortunately, the high-IQ societies do
not have such a large group of guinea pigs. I did use volunteers from the Triple Nine Society, for
which I was serving as editor, and I chose problems that about half of them missed, so that a
score of 24 out of 48 seemed about right for the 99.9 percentile. It was interesting to me that the
percentage of people who could solve the three-interpenetrating-cubes problem in the Mega
Test rose dramatically with higher overall scores, even though 50% of the test was verbal and
12.5% involved number sequences, so that the ability to solve such a difficult spatial problem
should not have correlated very well with the rest of the test unless the overall test was actually
doing a pretty good job in measuring general intelligence. For the first 3,200 people who tried
the Mega Test, 13 scored 43 to 48 right, of whom 7 people (53.8 percent) solved the cubes
problem correctly. Of the 304 who scored 37 to 42 right, 31.2 percent solved the cubes problem.
Of those who scored 31 to 36 right, 10.5 percent solved the cubes problem. Of those who
scored 25 to 30 right, 4.9 percent solved the cubes problem. Of those who scored 19 to 24 right,
0.7 percent solved the cubes problem. Of those who scored 13 to 18, 0.6 percent solved the
cubes problem. Of those scoring 7 to 12 right, 0 percent solved the cubes problem. And of those who scored 0 to 6 right, 0 percent solved the cubes problem. I don't see how fussing around with
the low end of the scale can shed much light on where the one-in-a-million level should occur.
We do know that several people who got PhDs from places like Caltech and M.I.T. managed to
Ron Hoeflin
Ken Shea
"How does the water of the brain turn into the wine of consciousness?" -David Chalmers
""I'm writing a book on magic," I explain, and I'm asked, "real magic?" By real magic people mean miracles,
thaumaturgical acts, and supernatural powers. "No," I answer: "Conjuring tricks, not real magic."
Real magic, in other words, refers to magic that is not real, while the magic that is real, that can actually be done, is
not real magic." -Lee Siegel (conjurer and author of Net of Magic: Wonders and Deceptions in India)
“Eliminative materialism is the thesis that our commonsense conception of psychological phenomena constitutes a
radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will
eventually be displaced, rather than smoothly reduced, by completed neuroscience.” -Paul Churchland
One could argue the color blue on a phenomenological level doesn't truly exist externally.
Indeed, phenomenology doesn't exist externally unless one takes an expansive view of the
mind-body problem, as some phenomenal externalists and revivalist panpsychists have done.
Presupposing blue doesn’t phenomenologically inhere in the real world, then what exists externally? The electromagnetic radiation is real and external, but the transduced internal perception of the particular wavelengths corresponding to supposed color qualia vis-à-vis the electromagnetic radiation (the average human eye can see wavelengths from approximately 380 to 740 nanometers) is arguably illusory insofar as the
internal perception might well be mistaken or subjective. In any event, there’s transduction.
Contending otherwise risks falling prey to naïve realism or Antti Revonsuo’s “paradox of
isomorphism.” Antti Revonsuo has this to say: “The problem of isomorphism is that if
consciousness literally resides in the brain, then there must be something in the brain that
literally resembles or is similar to consciousness - that is to say, consciousness itself”
(“Prospects for a Scientific Research Program on Consciousness,” Neural Correlates of
Consciousness, page 67). Classically resolving the paradox of isomorphism means finding an
organizational structure in the brain that resembles the contents of phenomenal consciousness.
This may be more of a category error than paradox for reasons that will become obvious, e.g.,
the vehicle and content in a representational context can be distinct, though Antti Revonsuo
seems hostile to this aspect of teleofunctionalism. Researchers Gerald Edelman and Giulio
Tononi have proposed bringing the entire idea of qualia up to date by saying, “Qualia can be
considered scientifically as forms of multidimensional discrimination carried out by a complex
brain.” The vehicle for the content of color, the folk-psychology notion of color qualia, entails
visual cortex area 4 (i.e., V4), particularly the lateral occipitotemporal gyrus (i.e., fusiform gyrus)
and the medial occipitotemporal gyrus (i.e., lingual gyrus), once creatures with trichromatic
vision (the three types of retinal cone photoreceptor cells are known as S, M, and L cones for
short, medium, and long cones, respectively) process color information axonally through the
optic nerve and lateral geniculate nucleus.
The transformation of these sensory activation vectors, temporally and through an extremely
complex matrix via massively parallel processing, facilitates a representational,
neurocomputational model of mentation. Sensory patterns “get transformed mainly by the vast
filter of synaptic connections they have to traverse in order to stimulate the cortical population.
The result is typically a new pattern across the cortical canvas, a principled transformation of the
One could go further than updating the idea of qualia and posit that the self is an illusion, as the
German neurophenomenologist Thomas Metzinger has done in Being No One: The Self-model
Theory of Subjectivity as well as The Ego Tunnel: The Science of the Mind and the Myth of the
Self. The folk-psychology notion of self is simply a phenomenal self-model per Metzinger. At
root, the phenomenal self-model theory of subjectivity is a representationalist model that seeks
to explain how the brain interprets and interacts with mental constructs phenomenologically
represented as epistemically veridical aspects of frequently reciprocal inner and outer worlds (cf.
Immanuel Kant’s inner and outer sense). These conclusions about the so-called self and
epistemic fallibility qua phenomenology may be difficult for most people to concede, disquieting
to ponder and perhaps slippery to conceptually grasp given widespread phenomenal
transparency, but there is mounting empirical support for such a position.
Phenomenology and epistemology need not align perfectly. [1] In fact, Buddhists have long
contended that the self is an illusion; the Buddhist concept of anatta, one of the supposed three
marks of existence, means “non-self” or “substanceless”; Buddhists maintain that the notion of a
self or transcendental soul is falsidical. The idea in Buddhism is that an individual is potentially
the moment-to-moment amalgamation of five aggregates (a.k.a., skandhas): form (rupa),
sensations (vedana), perceptions (samjna), mental activity (samskara), and awareness
(vijnana). Suffering or unsatisfactoriness (dukkha), Siddhartha Gautama believed, manifests as
people cling to one or more of the aggregates. Ontologically, because all of the five aggregates
(or factors) are constantly changing and empty (i.e., sunyata), the so-called self (or product of
these factors) cannot have permanence or ontological substance - remember, anatta means
substanceless. Buddhist teaching and Thomas Metzinger’s phenomenal self-model theory of
subjectivity are in harmony insofar as each views the self as a dynamic process rather than an
independent, permanent, transtemporal ontological object, let alone such an ontological object
necessitating a private ontological realm à la Cartesian dualism.
Evolution has rendered humans epistemic naïve realists by default because having one
streamlined, globally represented view of reality has advantages in terms of the assignment of
resources (e.g., glucose), psychomotor planning [6], and remaining largely in the dark about the
building blocks that contribute to subsequently represented global phenomenal state space.
Bluntly, computational load is reduced. Getting lost in the labyrinthine preliminary stages of
phenomenal consciousness would have been dangerously impractical, and would have carried a higher metabolic cost, for progenitors of the human species as well as for other animals. What's important
to remember is that phenomenal transparency is inversely related to the degree of attentional
availability with respect to these preliminary stages of phenomenal consciousness. Such a
definition implies that phenomenal transparency is on a continuum with phenomenal opacity and
that a significantly undiluted version of the latter has major evolutionary hurdles to overcome,
but phenomenal opacity is definitely possible (e.g., lucid dreams) in principle. In such aberrant
cases, one intimately understands that reality is simulation. As a rule, every representation is a
simulation but not every simulation is a representation. On a prosaic level, logico-semantic
reasoning tends to be less transparent, less streamlined, because it is a more recently acquired
mental ability; fewer evolutionary refinements have rendered logico-semantic reasoning more
phenomenally opaque (n.b., the high metabolic cost of rigorous step-by-step thinking), and the
The Binding Problem, Innumerable Easy Problems, and the Hard Problem
There’s a staggeringly real possibility that both qualia and phenomenology are fictions in this
model. Philosopher Diana Raffman has claimed that people are better able to discriminate
perceptual values than identify perceptual values. Earlier you learned that the visible spectrum encompasses wavelengths from 380 to 740 nanometers on the overarching electromagnetic spectrum. Research shows that between 430 and 650 nanometers, or almost two-thirds of the visible spectrum, a human being can discriminate about 150 different color wavelengths by comparing them with one another; however, the number drops from 150 to 15 when one is forced to singly identify particular colors attentionally or semantically. The technical term for this is a lack of “introspective identity
criterion.” Invariably, the ineffable 135 colors (150 - 15 = 135) qua perceptual values have a
neural correlate of consciousness, i.e., the empirical state of the brain is different in each of
these 135 cases even though the perceptual values cannot be identified with introspective
attention or articulated. Daniel Dennett and Paul Churchland have both suggested that qualia as
such, i.e., as classically defined by Clarence Irving Lewis, do not exist and consciousness may
be a kind of higher-order illusion; philosopher Paul Churchland counsels essentially supplanting
Generally, conceptual-empirical progress can be made by mapping one theory onto another. In
a certain sense, indeed, this is a form of reduction, but reduction itself is a relationship between
theories rather than phenomena (cf. Johannes Kepler’s three laws of planetary motion getting
encompassed by Isaac Newton’s more broadly explanatory three laws of motion in classical
mechanics). The problem arises when one theory promotes concepts (e.g., qualia) which resist
mapping onto another theory (e.g., neuroscience). Consider that qualia are said to have the
following properties: ineffability, intrinsicality, privateness, and immediate apprehensibility.
This creates a serious problem for mapping one theory onto another theory because of the
incoherency or ineffability of a pivotal constituent concept of one theory. Because an identity
criterion does not seem possible, in principle, for qualia (“ineffability”), the only way forward is
elimination: A lack of identity criterion for qualia and perhaps consciousness, or the inability to
treat these concepts as veritable theoretical entities, may require elimination. Consciousness
researchers Paul Churchland and Frank Jackson have something interesting to say here.
● Incompatibility with closely neighboring theories that are performing extremely well (cf.
vitalism vis-à-vis metabolic and molecular biology)
● Poor extension to domains continuous with but outside the domain of initial performance
(cf. Newtonian mechanics in strong gravitational fields or at high relative velocities)
Frank Jackson, in “Finding the Mind in the Natural World,” equates science with “serious
metaphysics” and says that, “Serious metaphysics is simultaneously discriminatory and
putatively complete, and the combination of these two factors means that there is bound to be a
whole range of putative features of our world up for either elimination or location.” Because of a
Stepping back, there is a very logically compelling idea in philosophy of mind that suggests
taking a measurement of the brain’s state in deep sleep (presumed to be the baseline or strictly
necessary physiological condition for consciousness to arise upon waking) and subtracting that
state from the brain’s measured state in non-pathological, human waking consciousness will
thereby subtract the necessary conditions from the necessary and sufficient conditions, isolating
the sufficient conditions for consciousness to arise. The same kind of procedure could be
performed to compare non-pathological with pathological cases in looking for the neural
correlates of consciousness. What exactly distinguishes Cotard’s syndrome or “walking corpse
syndrome” from associated conditions, such as schizophrenia and psychotic breaks? How do
these differences relate to potential fractures in the phenomenal self-model?
This kind of logic almost seemed like a background assumption for the following: Nobel
Prize-winning biologist Gerald Edelman and neuroscientist Giulio Tononi teamed up to exposit
the empirically testable dynamic core hypothesis. Their preferred modes of empirical exploration
for confirming the dynamic core hypothesis were functional magnetic resonance imaging (fMRI),
topographic electroencephalogram (EEG), and magnetoencephalography (MEG) because these
offered the wide spatial coverage and high temporal resolution to reveal virtually real-time [8]
changes in conscious and subconscious processing vis-à-vis functional clusters and reentry.
Functional clusters are defined as follows: “A subset of elements within a system will constitute an
integrated process if, on a given time scale, these elements interact much more strongly among
themselves than with the rest of the system. Such a subset of strongly interacting elements that
is functionally demarcated from the rest of the system can be called a functional cluster.” Gerald
Edelman and Giulio Tononi found that, in general, weak, degraded, or ephemeral stimuli had a
lower probability of facilitating quick, strong, and distributed neural interactions, a kind of sine
qua non for conscious perception, Edelman and Tononi reckoned. The two core tenets of the
dynamic core hypothesis are thus - “A group of neurons can contribute directly to conscious
experience only if it is part of a distributed functional cluster that, through reentrant interactions
in the thalamocortical system, achieves high integration in hundreds of milliseconds” and,
second, “To sustain conscious experience, it is essential that this functional cluster be highly
differentiated, as indicated by high values of complexity.” The process of reentry is defined as,
“the ongoing, recursive, highly parallel signaling within and among brain areas” (“Reentry and
the Dynamic Core,” Neural Correlates of Consciousness, page 142). The dynamic core
hypothesis ostensibly makes serious headway with the binding problem (cf. Antonio Damasio’s
“Time-locked Multiregional Reactivation: A Systems-level Proposal for the Neural Substrates of
Recall and Recognition”), partly because it balances integration and differentiation, a
combination of regularity and randomness, conducive to producing an emergent phenomenon.
Let’s further unpack time and complexity for a moment because these are extremely important
to the dynamic core hypothesis. Cognitive scientist Benjamin Libet showed that high-frequency
somatosensory stimuli sent to the thalamus required 500 milliseconds (i.e., half a second) to
produce a conscious experience for the subject whereas less than 150 milliseconds can
produce (subconscious) sensory detection of an environmental change without full (conscious)
awareness (cf. Antonio Damasio’s Protoself and Core Consciousness). Multiple researchers
have demonstrated that sustaining these evoked potentials required the excitation of pyramidal
neurons in the somatosensory cortex via reentrant interactions with higher cortical areas (e.g.,
Cauller, 1995). Reentrant interactions among distributed functional clusters are necessary but
insufficient to produce conscious experience. Consider that the brain during a seizure is
hyperactive, but EEG demonstrates that a large number of brain regions are active in on-off
synchronicity with each other (i.e., non-dynamic, non-complex though diffuse brain activity),
momentarily precluding consciousness. The on-off pattern is echoed in deep sleep, which
typically produces less vivid dreams than REM sleep; notably, cerebral blood flow is globally
reduced in slow-wave sleep. Quick, dynamic, and complex brain activity with plenty of reentry
(as opposed to slower, more predictably distributed, and globally synchronous brain activity)
seems to produce more conscious waking and, indeed, dreaming (viz., REM, or rapid-eye
movement, phase) phenomenological states.
One of the more pronounced, clinically controllable, and clean examples of losing phenomenal
consciousness would appear to be general anesthesia (cf. Hans Flohr’s “NMDA
Receptor-mediated Computational Processes and Phenomenal Consciousness”). Consider that
the lights of consciousness (e.g., NMDA receptor agonists and the thalamocortical network) dim
then gradually brighten for the patient undergoing general anesthesia. Phenomenal
transparency and phenomenal presence seem to precede phenomenal perspectivalness in
returning online following the administration of general anesthesia, which could imply an
evolutionary order and the degree of deep-rooted strength with the neural correlates underlying
each respective feature of phenomenal consciousness. [9] In short, whatever dims then
gradually brightens in the brain to restore consciousness in the patient invariably underlies the
sufficient neural correlates of consciousness and should provide the, perhaps counterintuitive,
solution to the hard problem of consciousness by bridging the explanatory gap.
Eventually, empirical tools offering high temporal resolution and sufficiently wide spatial
resolution, such as magnetoencephalography, could highlight a dynamic core governing
conscious experience by showing quick, dynamic, and complex interactions among distributed
functional clusters freely exhibiting reentrant interactions in the thalamocortical system. Rather
than the global minimally sufficient neural correlates of consciousness being an independent,
permanent, transtemporal ontological object (cf. Ship of Theseus) or being privileged with a
private ontological realm, a shifting dynamic core could underlie phenomenal consciousness.
Age-old mystical teachings about the substanceless nature of self already seem to be
coalescing with modern neuroscience, which reveals the self as an emergent, highly dynamic
phenomenon of biology qua teleofunctionalism. Certain brain areas talking more feverishly
amongst themselves or brain areas always, sometimes, or never included - and, implicitly,
excluded - from this ongoing conversation would have implications for the global neural
correlates of consciousness and help empirically resolve perennial philosophical quandaries.
These might be early days for consciousness research, which remains in the pre-paradigmatic
phase. Neurophilosopher Thomas Metzinger, nonetheless, thinks the global neural correlates of
consciousness will be pinpointed by the year 2050. If this pulse-quickening prediction comes to
pass, discovering the global neural correlates of consciousness will greatly illuminate the past
utility and future possibilities of subjective awareness and enrich what it means to conceive of
oneself as a conscious being in the ever-vanishing present moment, a conjured picture of the
present rooted in the immediate past that enables humans to dream about future worlds.
“If the doors of perception were cleansed, everything would appear to man as it is, infinite.” -William Blake
“The objective measurement of temperature considerably preceded the development of an adequate theory of
temperature and heat, and necessarily so, as the science of thermodynamics could not possibly have developed
without first having been able to quantify or measure the temperatures of liquids, gasses, and other substances
independently of their other properties. Measurement and theory develop hand in hand; it is a continuing process of
improvements in the one making possible advances in the other.” -Arthur Jensen
“All things fall short of absolute certainty: life itself might be a dream and logic a delusion.” -Thomas Sowell
[2] “In order to explain the phenomenal unity of consciousness as a representational phenomenon, we
have to look for the point of maximal invariance of content in the conscious model of reality. What is the
representational content that displays the highest degree of invariance across the flow of conscious
experience? The current theory says that it is to be found in certain aspects of bodily self-awareness and
the conscious experience of agency. There will not only be a changing gradient of invariance within the
phenomenal model of reality (in terms of more or less stable elements of experiential content) but also a
gradient of coherence (in terms of different degrees of internal integratedness between such elements).”
(Being No One: The Self-model Theory of Subjectivity, page 134)
[3] “As recent research in bistable phenomena (e.g., see Leopold and Logothetis 1999) has vividly
demonstrated, if two incompatible interpretations of a situation are given through the sensory modules,
then only one at a time can be consciously experienced. The generation of a single and coherent
world-model, therefore, is a strategy to achieve a reduction in ambiguity. At the same time, this leads to a
reduction of data: the amount of information directly available to the system, for example, for selection of
motor processes or deliberate guiding of attention, is being minimized and thereby, for all mechanisms
operating on the phenomenal world-model, the computational load is reduced.” (Being No One: The
Self-model Theory of Subjectivity, page 136)
[4] “In particular, if it is true that, as I have claimed, the human self-model is always functionally anchored
in a more or less invariant source of internally generated input, then there should be a nucleus of
invariance in a certain part of the unconscious self-model, for instance, provided by abstract
computational features of the spatial model of the body or, as Damasio has hypothesized, in those
brainstem structures continually regulating the homeodynamic stability of fundamental aspects of the
internal chemical milieu.” (Being No One: The Self-model Theory of Subjectivity, page 528)
[5] “The phenomenal world0 as a fixed reference basis for all possible simulations has to be, in principle,
inviolable. This is why the phenomenal world and the phenomenal self not only appear as numerically
identical to us but as indivisible as well - a feature of our phenomenal architecture - which Descartes, in
section 36 of his Sixth Meditation, used to construct a dubious argument for the separateness of mind and
body.” I would claim that there is a higher-order phenomenal property corresponding to this classical
concept of “indivisibility.” It is the phenomenal property of global coherence, and it is this property which
really underlies most classical philosophical notions concerning the “unity of consciousness.”” (Being No
One: The Self-model Theory of Subjectivity, page 132)
[6] Psychomotor planning, recurrent loops in neural networks, working memory, and the
phenomenological sense of “nowness” per Metzinger may all be explained - indeed, necessitated - by
global-workspace theories (e.g., Bernard Baars’s A Cognitive Theory of Consciousness). Perhaps an
artificial Now is needed to form a baseline or phenomenal world0 for planning “future” action.
[7] Qualia might someday be likened to yesteryear’s luminiferous aether, phlogiston, and élan vital.
[8] The representational model of the phenomenal world can, strictly speaking, never be truly real-time
because the processing that gives rise to the representational phenomenal world image takes time to
produce. The interpreting phenomenal self-model is remarkably efficient thanks to massively parallel
processing, but the compiled sense data also has a lag. In a neurocomputational sense, the present is, at
best, a picture of the past. In pathological cases, a reliable picture of the past is not guaranteed.
[9] “It is typical for NMDA antagonists like ketamine and phencyclidine to cause bizarre ego disorders.
Patients report what has been called ego dissolution, a loosening of the ego boundaries that may end up
in a feeling of merging with the cosmos, and an ego disintegration, i.e., a loss of control over thought
processes.” (“NMDA Receptor-mediated Computational Processes,” Neural Correlates of Consciousness,
page 253)
Block, Ned, Owen Flanagan, and Guven Guzeldere. The Nature of Consciousness: Philosophical Debates.
Cambridge, Massachusetts. The MIT Press. 1997.
Cauller, L. “Layer 1 of Primary Sensory Neocortex: Where Top-down Converges Upon Bottom-up.” Behavioural Brain
Research, 71: 163-180.
Chalmers, David. “What is a Neural Correlate of Consciousness?” Neural Correlates of Consciousness, 17-40.
Churchland, Paul. The Engine of Reason, the Seat of the Soul: A Philosophical Journey Into the Brain. Cambridge,
Massachusetts. The MIT Press. 1995.
Churchland, Paul, and Patricia Churchland. On the Contrary: Critical Essays, 1987-1997. Cambridge, Massachusetts.
The MIT Press. 1998.
Churchland, Paul. Scientific Realism and the Plasticity of the Mind. Cambridge, England. Cambridge University
Press. 1979.
Damasio, Antonio. “Time-locked Multiregional Reactivation: A Systems-level Proposal for the Neural Substrates of
Recall and Recognition.” Cognition, 33: 22-62.
Edelman, Gerald, and Giulio Tononi. “Reentry and the Dynamic Core: Neural Correlates of Conscious Experience.”
Neural Correlates of Consciousness, 139-152.
Flohr, Hans. “NMDA Receptor-mediated Computational Processes and Phenomenal Consciousness.” Neural
Correlates of Consciousness, 245-258.
Libet, Benjamin. “The Neural Time Factor in Conscious and Unconscious Events.” CIBA Foundation Symposium,
174: 123-137.
Metzinger, Thomas. Being No One: The Self-model Theory of Subjectivity. Cambridge, Massachusetts. The MIT
Press. 2004.
Metzinger, Thomas. The Ego Tunnel: The Science of the Mind and the Myth of the Self. New York, New York. Basic
Books. 2009.
Metzinger, Thomas. Neural Correlates of Consciousness. Cambridge, Massachusetts. The MIT Press. 2000.
Penrose, Roger. Shadows of the Mind: A Search for the Missing Science of Consciousness. New York, New York. Oxford University Press. 1994.
Revonsuo, Antti. “Prospects for a Scientific Research Program on Consciousness.” Neural Correlates of
Consciousness, 57-76.
http://www.scholarpedia.org/article/Complexity
http://www.scholarpedia.org/article/Hard_problem_of_consciousness
http://www.scholarpedia.org/articles/Models_of_Consciousness
http://www.scholarpedia.org/article/Neural_correlates_of_consciousness
http://www.scholarpedia.org/article/Self_models
http://www.scholarpedia.org/article/Teleofunctionalism
https://en.wikipedia.org/wiki/Mind–body_problem
Adam Kisby
There is a sense in which the Principle of Testability is the sine qua non of science. Insofar as
science is a process whereby the map of theory is brought into conformity with the territory of
data, testability would seem to be a necessary component of the scientific method. Of course,
testability sometimes entails comparing areas of the map to other areas of the map or bits of the
territory to other bits of the territory, but testability in the present context refers to comparing
areas of the map to bits of the territory for the purpose of modifying theory when it does not
match its corresponding data (cf. Popper, pg. 9). Sociologist of science Marcello Truzzi’s idea of
testability refers to empirical testing, checking ideas against experience. This idea of testability
goes back at least to Aristotle, who asserts that “experience is what a normal observer […]
perceives under normal circumstances” (Feyerabend, pg. 109). For Truzzi, the Principle of
Testability requires that claims be both verifiable (in the sense of Francis Bacon) and falsifiable
(in the sense of Karl Popper). This commonly accepted version of the principle is examined
here.
Although Aristotle includes a formalization of empiricism in the Organon, much of science by the
time of Francis Bacon had lapsed into unalloyed rationalism, such that Bacon is prompted to
say that the inductive method, which he understands to be the defining characteristic of
empiricism, “has not been tried.” At that time, scientific reasoning was almost exclusively
deductive rather than inductive, and Bacon objects that any conclusions reached by deductive
reasoning could be no more certain than the axioms from which they were derived. [Ed. note: A
priori propositions are made independent of experience whereas a posteriori propositions are
made in relation to experience; thereby the latter are more probabilistic, like inductive
reasoning.] To remedy this situation, Bacon insists in the Novum Organum that all axioms must
be elicited “from sense and particulars, rising in a gradual and unbroken ascent.” He describes
this method as the “true way” (McGrew, pg. 191). British philosopher John Stuart Mill greatly
develops the inductive method, noting that “all discovery of truths not self-evident, consists of
inductions, and the interpretation of inductions: that all our knowledge, not intuitive, comes to us
exclusively from that source” (Mill, pg. 207). Possibly the strongest argument in favor of the
inductive method is that even the rules of deductive reasoning appear to have been formulated
by inductive means. Despite the apparent certainty that verifiability in the form of the inductive
method confers on scientific knowledge, there are objections to its inclusion in the scientific
method.
The first objection to verifiability is that there is no unambiguous historical justification for its use.
Thomas Kuhn states, “We often hear that [scientific discoveries] are found by examining
measurements undertaken for their own sake and without theoretical commitment. But history
offers no support for so excessively Baconian a method” (Kuhn, pg. 28). Kuhn’s objection, then,
is that Bacon’s way still has not been tried, so there is no way of knowing if its inclusion in the
scientific method is warranted.
The third objection to verifiability is that the problem of induction has not been solved. Where
inductive reasoning is defined as passing “from singular statements (sometimes also called
‘particular’ statements), such as accounts of the results of observations or experiments, to
universal statements, such as hypotheses or theories,” no number of ‘particular’ statements,
however large, can ever logically justify the acceptance of any universal statement (Popper,
pgs. 3-4). For example, “no matter how many instances of white swans we may have observed,
this does not justify the conclusion that all swans are white,” because a single observation of a
black swan would disprove that conclusion. Karl Popper rejects attempts by David Hume and
Immanuel Kant to justify inductive reasoning. Hume’s attempt consists of an inductive argument,
so “we should have to assume an inductive principle of a higher order; and so on. Thus the
attempt to base the principle of induction on experience breaks down, since it must lead to an
infinite regress” (Popper, pg. 5). And because Kant’s attempt consists of positing the Principle of
Induction as an axiom rather than providing a logical argument, Popper has only to point out
that there is no compelling reason for positing such an axiom in the first place (Popper, pgs.
5-6).
The first reason for questioning falsifiability is that any given theory is inconsistent with some
datum. Kuhn puts it this way: “If any and every failure to fit were ground[s] for theory rejection,
all theories ought to be rejected at all times” (Kuhn, pg. 146). Feyerabend objects to the
inclusion of falsifiability in any scientific method on this very basis: “The demand to admit only
those theories which are consistent with the available and accepted facts […] leaves us without
any theory […] The right method must not contain any rules that make us choose between
theories on the basis of falsification” (Feyerabend, pg. 50-1).
The second reason for questioning falsifiability is that there may be disagreement among
scientists over some datum that is alleged to falsify a given theory. If some number of data were
required to falsify the theory, then there would remain the problem of disagreement among
scientists over what that number should be. A scientist may make the argument that the
observation of a few black swans in Australia “may itself be trivial and the remainder much more
important,” for “it is still the case that, so far as we know, all swans except the black ones in
Australia are white” (Stevenson, pg. 258). In such cases, it might be more reasonable to modify
a theory than to reject it entirely. Popper concedes that he would allow for this sort of
modification, but only if the resulting theory were more falsifiable than the original theory
(Popper, pg. 20).
The third reason for questioning falsifiability is that the “belief that only falsifiable ideas are
scientific may snuff out innovative ideas before they have had a chance to survive testing”
(Stevenson, pg. 258). Popper’s main reason for rejecting verifiability and proposing falsifiability
is that “[verifiability] does not provide a suitable ‘criterion of demarcation’ [between science and
pseudoscience]” (Popper, pg. 11). Charles Darwin’s theory of evolution by natural selection was
rejected by Popper as unscientific, but it has proved to be an enormously fruitful, albeit
incomplete, theory. Fort’s opinion of Darwin’s notion of survival of the fittest is similar: “There is
no way of determining fitness except in that a thing does survive. ‘Fitness,’ then, is only another
name for ‘survival.’ Darwinism: That survivors survive” (Fort, pg. 24). More recently, Popper
admits that his rejection of evolutionary theory “was perhaps going too far” (Horgan, pg. 38).
Fort also acknowledges that “[Darwinism’s] attempted coherence approximate[s] more highly to
Organization and Consistency than did the inchoate speculations that preceded it” (Fort, pg.
24). Ultimately, Popper explains that his criterion of demarcation “separates two kinds of
perfectly meaningful statements: the falsifiable and the non-falsifiable” (Popper, pg. 18). As
such, falsifiability is a metaphysical principle that is not subject to falsification, so the objection
that falsifiability does not “satisfy its own criteria” is, to quote Popper, “one of the most idiotic
criticisms one can imagine!” (Horgan, pg. 38) In practice, though, labeling a theory as
pseudoscientific results in its automatic rejection by large sectors of the scientific community.
The first general criticism of the Principle of Testability is that testability is often naïvely assumed
to be equivalent to predictability, and that the role of predictability in science subsequently
becomes overemphasized. At its extreme, this overemphasis becomes a belief that predictability
is “essential to the process of empirical testing of hypotheses, the most distinctive feature of the
scientific enterprise” (Stevenson, pg. 260). Ian Stevenson quotes renowned philosopher of
science Rudolf Carnap as saying, “The supreme value of a new theory is its power to predict
new empirical laws” (Carnap, pg. 260). Fort reads “over and over that prediction is the test of
science” but concludes that the ability of a theory to predict a phenomenon does not mean that
that theory has explained the phenomenon: “Take for a base that the earth moves around the
sun, or take that the sun moves around the earth: upon either base the astronomers can predict
an eclipse” (Fort, pg. 713). John Stuart Mill, who supports verifiability, states that “predictions
and their fulfillment are, indeed, well calculated to strike the ignorant vulgar” (Stevenson, pg.
260). Thoughtful consideration of the problem leads to the conclusion that the ability of a theory
to predict a phenomenon is not especially meaningful.
The second general criticism of the Principle of Testability is that testability requires
unambiguous data, but that data are never unambiguous. Philosophers of science speak of
theories as being “underdetermined” by data, or they say that data are “theory-laden” (e.g., Shermer, pg. 46). This aspect of the theory-data relationship is most clearly expressed by Kuhn:
“Philosophers of science have repeatedly demonstrated that more than one theoretical
construction can always be placed upon a given collection of data” (Kuhn, pg. 76). Popper, who
might have argued that falsifiability also requires unambiguous data, acknowledges that
“nothing is easier than to construct any number of theoretical systems which are compatible with
any given system of accepted basic statements” (Popper, pgs. 264-265). Fort observes that
“only logicians think that anything has any exclusive meaning” and posits that “everything that
ever has meant anything has just as truly meant something else” (Fort, pg. 867). Fort provides
an entertaining illustration when he challenges the commonly accepted notion that “the round
shadow of this earth upon the moon proves that this earth is round” by pointing out that “if this
earth were a cube, its straight sides would cast a rounded shadow upon the convex moon”
(Fort, pg. 346). Truzzi disapproves of those who “argue, like Lombroso when he defended the
mediumship of Palladino, that the presence of wigs does not deny the existence of real hair”
(“Pseudo-Skepticism,” pg. 4). Nevertheless, it remains a perfectly valid form of argument.
[Ludwig Wittgenstein: “Why do people say that it was natural to think that the sun went round
the Earth rather than the Earth turned on its axis?”
Elizabeth Anscombe: “I suppose, because it looked as if the sun went round the Earth.”
Ludwig Wittgenstein: “Well, what would it have looked like if it had looked as if the Earth turned
on its axis?” -Ed. Note]
The third general criticism of the Principle of Testability is that testability requires a criterion of
inclusion and exclusion, but that all such criteria are necessarily arbitrary. Fort says that “no
basis for classification, or inclusion and exclusion, more reasonable than that of redness and
yellowness has ever been conceived of” (Fort, pg. 5). Fort goes on to say that all criteria are
continuous in the same way that redness and yellowness are continuous in orangeness, i.e.,
These objections to verifiability, reasons for questioning falsifiability, and general criticisms of
testability suffice to establish that the commonly accepted version of the Principle of Testability
is inconsistent with reason and frustrates scientific progress. Consequently, the Principle of
Testability, in its present form, is not essential to the process of scientific discovery and should
not be considered a necessary component of the scientific method. In the sense that testability,
broadly construed, is the sine qua non of science, it must be modified to accord with reason.
Sources:
Bauer, Henry. Science or Pseudoscience: Magnetic Healing, Psychic Phenomena, and Other
Heterodoxies. Chicago, Illinois: University of Illinois Press, 2001.
Calaprice, Alice. The New Quotable Einstein. Princeton, NJ: Princeton University Press, 2005.
Feyerabend, Paul. Against Method. New York, New York: Verso, 1997.
Fort, Charles. The Complete Works of Charles Fort. New York, New York: Global
Communications, 2009.
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago, Illinois: University of Chicago
Press, 1996.
McGrew, Timothy, Marc Alspector-Kelly, and Fritz Allhoff, Eds. Philosophy of Science: An Historical Anthology. Oxford, UK: Wiley-Blackwell, 2009.
<http://www.gutenberg.org/files/27942/27942-h/27942-h.html#toc43>
Popper, Karl. The Logic of Scientific Discovery. New York, New York: Routledge, 2002.
Shermer, Michael. Why People Believe Weird Things: Pseudoscience, Superstitions, and Other
Confusions of Our Time. New York, New York: Holt Paperbacks, 2002.
Stevenson, Ian. “What are the Irreducible Components of the Scientific Enterprise?” Journal of
Scientific Exploration 13.2 (1999): 257-70.
“The Perspective of Anomalistics.” Encyclopedia of Pseudoscience. New York, New York: Facts
on File, Inc., 2000.
ABSTRACT
Part eleven of eleven of the comprehensive interview with Rick G. Rosner: Giga Society
member, ex-editor for Mega Society (1991-97), and writer. He discusses the following
subject-matter: Genius of the Year Award – North America in 2013 from PSIQ and clarification
of statements; definition of the term “gods” in operational terms from the award statement;
discussion on our future rather than gods; thoughts on aesthetics within an informational
cosmology lens; some brief discussion on informational eschatology; human history’s numerous
examples of individuals and schools of thought aimed at absolute definitions of consciousness,
universe, and their mutual union; thoughts on Big Bang Cosmology and the possibility of its
replacement; three greatest mathematicians/physicists/cosmologists; three greatest
mathematics/physics/cosmology concepts; The Heisenberg Uncertainty Principle and
Wave-Particle Duality; Einstein-Podolsky-Rosen (EPR) Nonlocality; possibility of the universe operating in something more essential than information; everything in essence equating to a
Turing Machine in informational cosmology; operation of different time depending on
armature/universe in reference; mysteries; ex nihilo cosmogony; theology becoming
informational cosmology and vice versa; informational ethics in relation to numerous ethics; The
Problem of Evil; souls; Fr. Teilhard de Chardin, The Phenomenon of Man (1955), Omega Point,
and The Future of Man (1964); work needing to be done for Informational Cosmology; reflection on
theorizing and outlier background; common sense and intelligence; regrets; ethics of forming,
joining, and sustaining elite groups based on high and ultra-high IQs; harsh internet crowd,
frequent comments, and responses; principles of existence as the language of existence with
explicit listing of some of them; and thoughts on prevention of intellectual theft.
99. You earned the Genius of the Year Award – North America in 2013 from PSIQ. In your
one-page statement on winning the award, you say, “My one wish is that trying to extend human
understanding is doing God’s work.” In some sense, there seems no higher calling than
something akin to an internal – to the cosmos – teleological duty to assist the self-actualization
of the universe as sub-systems, various individual POVs, within the universe in service of God.
Does this fairly characterize the statement? If not, what did you attempt to address with such a
statement?
I was addressing a strain of religiosity which is hostile to science (or which misrepresents
science to advance an agenda). I would like fewer people to be anti-science and would like
people to be less subject to anti-scientific manipulation on religious grounds.
Isaac Newton thought that by making mathematical and scientific discoveries, he was doing
God’s work. I like the idea that figuring out how the world works and how to make it better is
helping God, not defying God.
Humans are part of a world we can choose to believe was created by God. Doing science isn’t
alien to the world or opposed to God.
[Editor’s Note: “I maintain that the cosmic religious feeling is the strongest and noblest motive
for scientific research.” -Albert Einstein]
Teleology isn’t a word that I embrace, because it can be used to sneak creationism into
evolution. Evolution, of course, isn’t a purposeful progression towards complexity. Rather, it’s
the proliferation of varied organisms via the occupation of exploitable niches, some of which are
occupied by organisms having complex abilities. (But simple organisms continue to occupy their
niches. And new, simple organisms continue to arise.)
The universe is a very complicated entity, and as such, demonstrates that highly complex
entities are permitted by the principles of existence (whatever those turn out to be). Can we help
our species, our planet, or even the universe itself self-actualize, and if so, is this some kind of
built-in bias towards complexity? Maybe, but I don’t see it as the hand of the Creator nudging us
towards glory. Rather, I see it as the possibility of mathematical teleology, with complex entities
perhaps statistically tending to have histories of increasing complexity. There is room for God or
gods in this, but gods who are subject to the same principles of existence that we are. Which
isn’t the worst thing – we are all striving, humans and gods alike.
Start with the Arthur C. Clarke quote that’s now so overused it’s a cliché – “Any sufficiently
advanced technology is indistinguishable from magic.” There are around a quarter or a third of a
trillion stars in the galaxy. A bunch of them have planets – there are tens of billions of planets in
the Milky Way – maybe 100 billion, maybe 200 billion or more. Even if only one in 10,000
contains life, that’s still 10 million planets with life. (And there are a hundred billion galaxies in
the universe.) Some must have intelligent life, and on some of these planets, tech-wielding life
most likely has a huge head start on us (because the odds of us being the first to tech in the
galaxy are one in however many tech civilizations there will eventually be). Even if it’s only a
thousand-year head start, that’s huge with regard to tech. And it’s possible that tech-wielding life
on some planets might have a billion-year head start. So it’s reasonable to assume that there
are some civilizations which are so advanced, their powers are almost magical in comparison to
ours. But to call them gods is something of a cheat – super-advanced civilizations that have
arisen in the past 14 billion years might best be called godlike.
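A back-of-the-envelope restatement of the planet arithmetic above, sketched in Python. The inputs are the round numbers quoted in the answer, not measured values, and the last line is just an extrapolation of those same figures to all galaxies.

# Restating the rough numbers quoted above; these are the interview’s round figures, not data.
planets_in_galaxy = 100e9        # “maybe 100 billion, maybe 200 billion or more”
life_fraction = 1 / 10_000       # “even if only one in 10,000 contains life”
galaxies = 100e9                 # “a hundred billion galaxies in the universe”

living_planets_per_galaxy = planets_in_galaxy * life_fraction
print(living_planets_per_galaxy)              # 10,000,000 – the “10 million planets with life”
print(living_planets_per_galaxy * galaxies)   # ~1e18 living planets if every galaxy is similar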
Super-advanced civilizations would be able to do awesome stuff – for instance, possibly defy
time to some extent by simulating a plethora of possible futures (on a rolling basis) and
choosing the best future from among them. At the very least, advanced civilizations will have
vast computational capacities. And the business of the universe is computation.
Next step in the hierarchy of godlike beings – let’s say I’m correct that the universe is vastly
older than 14 billion years. It’s not unreasonable to think that some civilizations have learned
how to survive galactic cycles, perhaps by hiding out in the enormous black hole-like objects at
the centers of galaxies or by hopping from exhausted galaxies to newer galaxies (if it’s even
possible to travel fast enough to escape a collapsing, exhausted region of the universe – hey,
maybe they could beam themselves via neutrinos). Civilizations (or entities) which can survive
for many multiples of 14 billion years would have fantastic capabilities – they might actively
participate in the running of the universe – beaming neutrinos at the burned-out galaxies they
want to reactivate, for example. Is it so unreasonable to think that something as large and old
and intricate as the universe might have intelligent entities helping to manage it? Such entities
might almost deserve the title of gods.
And the next step in the hierarchy – what if the universe itself is an entity, with perceptions,
thoughts, and objectives, playing out across octillions or decillions of years? That is –
That entity deserves to be called a god, but a god that did not make us, that may not know we
exist, and that doesn’t intercede in our affairs. [Similar to deism. -Ed. Note] We are made of its information.
And beyond the universe we live in is the universe in which the entity whose information space
we live in itself lives. Maybe it’s not turtles all the way down; maybe it is information spaces all
the way up.
These different levels of goddish beings share with us the basic constraints of existence.
They’ve almost certainly developed work-arounds for many of these limitations, but they share
the same general characteristics, even if such characteristics have been obscured and
weirdified by their godlike mastery of physical processes. It’s kind of nice that in wrestling with
existence, we and these gods are all in it together.
The various gods certainly have consciousnesses which are more powerful, more detailed, and encompassing more senses and types of analysis than ours. But who knows whether the differences in consciousness are merely differences of magnitude, or whether these gods perceive space and time in ways that are fundamentally different?
People aren’t freaked out enough about the future. Have I already said that? Humanity will be
forced to change – to embrace new, weird forms of thought. Here’s why – advanced artificial
intelligence is coming. It will be hard and perhaps impossible to design AI so that it doesn’t want
stuff for itself. It won’t just be our faithful servant. So we’re gonna have to keep up with it – we’ll
need to be joined to AI, so that we remain, for as long as possible, among the smartest beings
on the planet. When occupying niches, species tend not to limit themselves. External factors
limit how far species expand. Similarly, if it’s us versus AI in a struggle to occupy the same
niches, the smarter entities will overpower the weaker ones. We can’t program AI to limit itself –
it’s too likely that any barriers will spring leaks.
We’ll need to develop and evolve a worldwide (and eventually a solar system-wide) ecosystem
which incorporates AI. That is, we’ll need to develop durable forms of advanced intelligence
which don’t just ravage all available matter for computing purposes. It doesn’t seem
unreasonable that AI and humans-plus-AI will eventually find niches that don’t threaten the
existence of all other life on earth. But that probably won’t happen unless we keep up with AI by
augmenting ourselves with it.
The world will be flooded with AI cops – software, hardware, etc. that will spy on everything to
make sure that hyper-destructive AI and nanotech don’t get loose and destroy everything. There
will have to be cyber cops on top of cyber cops – like an immune system – trying to keep
outbreaks of bad AI local. Privacy will be left in tatters. (This could be an unrealistic science
fiction TV show set 20 years in the future. A squad of sexy cops fight bad AI and nanotech.
Perhaps make it a comedy, so the glaring errors can be seen as funny instead of stupid.)
We’ve been talking about ethics. Throughout history, humanity has had generally agreed-upon
ethics for the protection of life and property and sometimes freedom, based on what humans
want – comfort and safety. Such protections don’t extend far beyond humans, and we’ve found
little evidence of the world itself having any ethical expectations. Our ethical framework is about
to be completely revamped. Consciousness will be quantified. Consciousness will be created in
non-living beings. Unaugmented human intelligence will no longer dominate the planet. Ethical
arguments will have to be more powerful, to persuade our far brighter descendants.
Ethical protections have extended from the self-appointed most special beings on Earth,
humans, to, often grudgingly, other humans and sometimes to animals, the environment, and
objects of historic value. Within 40 years and probably much sooner than that, unaugmented
humans won’t be the smartest, most talented known beings. Unaugmented consciousness will
be shown to be unimpressive in many ways. Winds of change will buffet the ethical umbrella,
and we don’t know who or what will be under it in 2060.
Narrative is important. We like stories. And stories are an essential part of the structure of
history. Just about every development in evolution and history involves someone or something
embracing change – often being the first to make a change. We offer people, animals, and
things ethical protection when we recognize and understand their stories. We have to sell the
future on the importance of unaugmented humans’ stories, even when the augmented are in
charge.
There are already some good timelines of the future. Ray Kurzweil’s timelines might be the most
well-known. He’s been making them since 1990, so you can judge how he’s done in his first 25
years of predicting. And this is a thorough, non-lunatic timeline –
Don’t know if I can do this. What I know is a bunch of stuff is gonna get weird and perhaps go
away. Pro and Olympic sports will get weird in the next century as human bodies become augmented.
2080: People commonly have relationships with artificial people, who, by the early 22nd century, have acquired limited rights.
Money is gonna get weird. Some human necessities will continue to get cheaper. Employment
will decrease. The life cycle of commercial enterprises will accelerate, making investment weird.
By the mid-22nd century, everything associated with human life as we’ve known it for thousands
of years gets weird as we have increasing choice of what should contain our minds and of the
form of consciousness itself. You could call the 2100s the Century of Choice. Dibs on that.
It’s also the century of fragmentation, as new choices of how to live lead to different societies
and sects and enclaves. After this, it’s hard to say what happens, because you can’t predict
what the prevalent forms of consciousness will be.
The mental isolation that humans have always felt – that we are separate, autonomous
individuals – will be eroded. We already have close working relationships with our devices, and
we’ll increasingly be nodes in a network of streaming information as everything in our world gets
packed with computing (and eventually thinking) circuitry.
Just remembered – made this list in 2013 as part of a pitch to Grantland – it’s everything I
thought would be going away.
Children (Currently, about 85% of humans have children. By 2090, less than 30% of humans
will have reproduced traditionally by the age of 60.)
Risk and wrecks (People who might live for many centuries won’t tolerate current levels of risk.)
Humans’ exalted view of ourselves (We’re gonna learn exactly how we work, and we’ll find it not
so awesome.)
The soul (We’ll have a mathematical model of how we feel that we have feelings. This will be a
good thing, but it won’t feel so good. Understanding consciousness could add an underlying
sadness to the world until people get used to it.)
Basic human concerns and drives (We’re gonna be able to rejigger the agenda that evolution
has wired into our heads.)
TV and movie storylines as we know them (All our entertainment is built around basic human
drives. Once we start messing with these drives, we have to mess with our stories. Romance,
action, comedy, drama, etc. all get reworked.)
Thinking we know what’s going on from moment to moment (Our awareness is really patchy
and cobbled together, but evolution doesn’t give a crap. Evolution wants us to have enough
awareness to survive and reproduce. Anything beyond that is a bonus.)
Privacy
Disease
Island consciousness (that is, not being able to link your brain to someone else’s)
Abject poverty and ignorance (except among angry, fucked-up, repressed populations)
Unhealthy food (Food that tastes great won’t actually be bad for you.)
No time travel, except through simulation (which will grow more and more powerful, but still
won’t let you change the past).
Probably no war between galactic empires. Empires don’t get you much – there’s no rare stuff
that can only be had on a certain planet. I guess civilizations might fight for control of large
bodies such as a neutron star that has neutrino jets or a black hole at a galactic center (which
might be good for vast amounts of computing). They won’t be fighting over worm poop that
helps you steer spaceships. According to many futurists, advanced civilizations just want to stay
home and compute – kinda like us with our smartphones.
We’ll eventually encounter other civilizations. I’m guessing finding alien life will be like dating
and marriage – initial excitement followed by vaguely interested familiarity.
And finally, a rule of thumb. In the 21st century, the percent weirdness of daily life roughly equals
the last two digits of the year. The year 2015 is 15% weird. (We spend all day staring at
screens. We have access to all information, and we constantly share information via social media.)
103. Any thoughts on aesthetics within your framework for understanding the world?
Conscious beings are driven by pleasure (and pain). Pleasure is associated with things that are
important to survival and reproduction. Perhaps more than any other species, humans get
pleasure from learning, because our niche is discovering exploitable regularities in the world.
We get aesthetic pleasure from representations of things associated with pleasure, especially
when those representations offer a satisfying hint of discovery or problem-solving.
Kitsch and porn pander to pure pleasure without the learning, while art offers at least the
suggestion of learning how to decode the world. At its best, the beautiful also offers insight.
Endorphins shape learning. Jokes are funny because they simulate an abridged learning
process. We enjoy music because it sets up expectations of patterns and then fulfills those
patterns. (And the rhythm sets up a framework that can keep us in the moment.) Familiarity in
our surroundings and predictability in our sensory input help structure our awareness – we’re
all a little like the guy in Memento.
The universe will likely largely stay the way it is for trillions upon quadrillions upon quintillions of
years. However, our galaxy will burn out and fall away from the active center after, I dunno,
another ten billion years or so. (Astronomers say the Milky Way and the Andromeda galaxy will
collide and merge in another five or so billion years, but that’s not the issue. It’s when the
merged galaxy’s stars burn out that it falls out of the active center.) Perhaps advanced
civilizations have ways of surviving the burning-out of a galaxy to persist for more than just tens
of billions of years. For us, with our puny conception of things, tens or hundreds of billions of
years might as well be forever. When and if the universe does end, it probably does so through
heat. Heat is noise and loss of information. The temperature of the cosmic background radiation
increases and sizzles everything away. The currently active center runs out of juice and falls
back into the hot background like Schwarzenegger being lowered into the molten steel in
Terminator 2.
Of course, for us, the idea of a civilization or entity lasting for billions of years is inconceivable.
How could an entity develop and accumulate knowledge for the equivalent of a million lifespans
of our current civilization? Well, maybe it doesn’t. Maybe it hits a ceiling of knowledge. Maybe
it’s like a security cam setup that keeps only a rolling record of the past 24 hours. At this point,
with knowledge of only one civilization that’s only 10,000 years old, we have no way of knowing.
Rather than pontificate on broad historical patterns, consider brief and mundane historical examples: the earliest known individuals with works focused on the gods, such as Hesiod with the Theogony, which runs through the traditional Greek mythological timeline, including the triumphs of Cronos over Ouranos and of Zeus over Cronos.
Other sets of individuals, comprising schools of philosophy, placed less focus on gods and more focus on forces of nature. The Milesians proposed different fundamental compositions of the world while removing the place of the gods: Thales (water), Anaximander (the apeiron, or the indefinite, infinite, unlimited), and Anaximenes (mist, air, or vapour). Each held views different from what came before, but monistic (non-plural) and material, as opposed to a plurality of gods and their caprices. The worldview of Thales stands out in particular because of the transition from the mythological, allegorical, and metaphorical world of Hesiod into the world of reason.
Some of these cosmological speculative philosophies gave rise to political and moral
philosophy. These speculations continued to lack comprehensive integration, even with the
question-based philosophies of Socrates and the Sophists. Plato and Aristotle provided the
most thorough accounts of a comprehensive philosophy covering numerous subjects over
many, many writings. This continued onward to the present day with individuals attempting
unification such as David Deutsch, David Chalmers, Edward Witten, Stephen Hawking, and so
on. Many bright lights in history. How do you assess or grade the attempts at absolute
definitions of phenomena such as consciousness?
For most of human history, people made all sorts of wrong guesses about the nature of
consciousness. It feels so ineffable and deeply, transcendently real – it has to be a bridge to
some kind of ethereal beyondness, right? After millennia of this, consciousness has a bad
reputation for being associated with la-de-dah mysticism. Mention consciousness, and people
get nervous that you’re gonna argue that rocks and trees and entire planetary surfaces are
conscious. [e.g., David Chalmers’s panpsychism -Ed. Note]
106. What makes the Big Bang so convincing? Is it at risk of being replaced?
The Big Bang is convincing for lots of reasons. It’s by far the most widely accepted theory of
cosmogony among scientists. However, it’s only held this position for the past 50 years. Before
the discovery of the Cosmic Microwave Background radiation in 1964-65, it was neck-and-neck
between Big Bang and Steady State Theory, which postulated that matter popped into existence
in empty space. And before Big Bang and Steady State Theory originated as a consequence of
general relativity and Hubble’s Law in the 1920s, we didn’t know enough about the large-scale
dynamics of the universe for any effective theorizing that I’m aware of.
The discovery of Cosmic Microwave Background radiation was dramatically convincing. In 1964,
some guys at Bell Labs built a radio telescope which picked up low-temperature noise they
couldn’t explain. They thought it might be bird poop on the antenna. Turned out to be light from
the early universe as predicted by the Big Bang. Game, set, match for Big Bang Theory.
The Big Bang explains a lot – the apparent velocities of billions of galaxies, the formation of
heavy elements, the size and apparent age of the universe, the proportions of elements found in
the universe, the relative youthfulness of more distant galaxies.
It’s conceptually easy – one big explosion, everything flies apart. Has a catchy name. Is the title
of the biggest sitcom on TV.
But it doesn’t explain enough. It minimizes cosmic questions, with the main question being, why
is nothingness so volatile that it explodes into an entire enormous universe? With enough
tweaks, Big Bang theory can explain the mechanics of how the universe exploded out of
nothingness, which is kind of satisfying from the point of view of physics, but not of philosophy.
It leaves too many physical constants unexplained – the proton-electron mass ratio and dozens
more. The Big Bang in general is not overly explanatory – it only tells you why some stuff is the way it is.
Big Bang Theory incorporates assumptions of uniform conditions and constants across the
entire universe. This is usually seen as a theoretical strength, but, like the unexplained physical
constants, Big Bang theory doesn’t completely justify why the universe should be uniform. The
philosophical reason, called the cosmological principle, is that we on earth are located nowhere
special in the universe, and furthermore, the entire universe is nowhere special. This is a
dangerous assumption. You can’t just demand that the universe be roughly the same
everywhere. What if that’s not how the universe works? The Big Bang has that assumption built
in. And while the Big Bang assumes uniformity in space, it does no such thing in time. There is
no uniformity across time in Big Bang theory – every observer is located at a unique moment in
the universe’s unfolding.
Some of the universe’s spatial uniformity is explained by cosmic inflation in the very early universe.
According to cosmic inflation, the universe expanded so fast (blowing up by a factor of at least
10^26 in less than 1/10^32nd of a second – that is, doubling in size every
1/10,000,000,000,000,000,000,000,000,000,000,000th of a second or so) that a tiny volume
without much room for variation became the entire visible universe, and the rapid expansion
also spread out any irregularities. The reason for such rapid inflation isn’t known, so cosmic
inflation is a little ad hoc.
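A rough consistency check of the inflation numbers quoted above, sketched in Python; it simply takes the stated factor of 10^26 and the stated duration of 1/10^32nd of a second at face value and recovers the doubling time.

import math

expansion_factor = 1e26     # “a factor of at least 10^26”
duration = 1e-32            # seconds – “less than 1/10^32nd of a second”

doublings = math.log2(expansion_factor)   # about 86 doublings are needed to grow by 10^26
print(doublings)                          # ~86.4
print(duration / doublings)               # ~1.2e-34 s per doubling, i.e. roughly 1/10^34th of a second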
Beyond cosmic inflation, the Big Bang requires more and more precise, fussy tweaks to agree
with increasing amounts of observational data. One would hope that there would be a theory,
either an add-on to Big Bang theory or an alternative, which would explain more of the
conditions of the universe without having to be tweaked to fit the conditions of the universe.
Our galaxy contains globular clusters – tight groups of a million or so stars – which may be older
than the Big Bang. Calculations are pretty equivocal on this – the clusters might not be that old.
Meh to the clusters.
Yeah, the Big Bang is in danger of being supplanted. It’s pretty much our first try at a theory of
the universe based on not-hopelessly-incomplete observational evidence. Even though the Big
Bang is young, it’s already accumulated a bunch of patches.
A digression –
Was up late last night, thinking about how active galaxies get to the active center. They can’t
just light up and slide into the center – what would cause the slide? And they can’t just slide out
of the center when burned out. I’m thinking maybe it looks like soap bubbles – lit-up galaxies
expand enough of the surrounding space that bubbles would be too big not to merge. There
wouldn’t be walls between bubbles – that’s incorrectly extending the analogy – but there would
be dark galaxies along the saddles between bubbles. Without being able to contribute to the
photon flux that keeps the active center inflated, maybe dark galaxies would slide along the
I suppose this would mean you could temporarily be of two minds – thinking of two things
somewhat independently – having a pair of incompletely merged active centers in your
mind-space – until your thoughts merge. While driving, you’re trying to remember your
second-grade teacher when another driver forces you slightly out of your lane. Your thoughts
about your split-second evasive driving maneuver don’t necessarily disrupt your thoughts about
second grade. Each pattern of thought informs itself more than it informs the other, unless you
then ponder your bifurcated thinking during the incident.
Darwin is one of my favorite cosmologists, even though he’s not a cosmologist. He took the idea
of deep time, which was being debated by geologists of his era, and applied it to biology, which
indirectly set the stage for the discovery, 60 years later, that we live in a universe that’s many
billions of years old. Some physicists of Darwin’s time argued against deep time, saying stars
couldn’t last that long. The longevity of stars wasn’t explained until the discovery of nuclear
fusion.
Newton was the first to describe gravity as the force holding all large objects together, which is a
necessary first step in a conceptual framework that encompasses the entire universe. And
Einstein made that framework much more explicit.
Also important are the developers of theories of information, including Alan Turing and Claude
Shannon.
I like Mach’s Principle, which states that inertia arises from an object’s interaction with the stellar
background (all the matter in the universe). Mach’s Principle has never been turned into a
precise mathematical theory, but it’s still compelling. If true, Mach’s Principle can’t mean that an
object is directly interacting with all matter as that matter is now, because of the speed of light.
The object has to be interacting with its local inertial field which is created by all matter, but with
matter’s contribution to the field delayed by distance, the same way we can see all the visible
stars in the universe but only as they were in the past.
Quantum mechanics is powerful, especially when viewed as the universe observing and
defining itself.
And relativity, both special and general and including Big Bang cosmology, is essential,
particularly when considered as aspects of how information is structured and how it behaves.
Uncertainty and wave-particle duality are aspects of a finite universe having a finite capacity to
define itself. Particles will be fuzzy. Say you’re playing roulette, one chip at a time. The best you
can do, on average, based on whether your chip pays off (and nothing else), is pin down the
number that came up to somewhere among half the numbers on the wheel. The universe is like
that – it doesn’t have an infinite number of chips to lay down to see exactly what comes up. Or
have an infinity of photons for particles to exchange with each other. (Though one difference
between the universe and blind betting and roulette is that an incompletely observed quantum
roulette ball lands in all possible slots. The information isn’t there-but-hidden – it’s just not there.
Black pays off – well, the ball’s probability wave occupies all the black slots (unless observed to
occupy a specific slot). The universe moves on.)
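A small sketch of the roulette analogy in Python, assuming for simplicity that a payoff splits the wheel exactly in half. The point is only that a single win/lose observation carries one bit, which at best narrows 38 slots down to about half of them.

import math

slots = 38                               # American wheel: 0, 00, and 1–36
numbers = set(range(slots))
payoff_half = set(range(0, slots, 2))    # illustrative “pays off” half of the wheel
chip_paid_off = True                     # the only thing we get to observe

consistent = payoff_half if chip_paid_off else numbers - payoff_half
print(len(consistent), "of", slots, "slots remain possible")            # 19 of 38
print(math.log2(slots / len(consistent)), "bit of information gained")  # 1.0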
The universe writes its own history moment by moment. But history is always incomplete. Under
the uncertainty principle, you can pin down some aspects of things with as much precision as
you want, but this will always be at the expense of other aspects. We’re used to feeling that the
universe has great solidity and precision because at our macroscopic scales, it does. Our
bodies contain nearly 10^28 atoms. We’re big, compared to atoms. We don’t generally perceive
atomic-scale lack of precision. We’re the beneficiaries of living in a universe with something like
10^80 particles, which define each other pretty precisely but not infinitely so through their
interactions.
Inexactly defined particles behave with a certain degree of mystery – of unknown information.
This unknownness takes definite forms – probability waves, etc. Defining how unknownness
and imprecision manifest themselves is the job of quantum mechanics. Patrick Coles, Jedrzej
Kaniewski, and Stephanie Wehner at the National University of Singapore just proved that
wave-particle duality is a manifestation of the uncertainty principle. Dr. Wehner said, “The
connection between uncertainty and wave-particle duality comes out very naturally when you
consider them as questions about what information you can gain about a system. Our result
highlights the power of thinking about physics from the perspective of information.” (Once
co-wrote an adult movie about time travel which included a scientist named Dr. Wiener. This is
not the same Dr. Wiener.)
Existence depends on self-consistency. You can set up situations in the universe in which the
discovery of the value of a variable at Point A implies the value of a linked variable at an
arbitrarily distant Point B. Every particle interaction is a handshake between two points in time
(as seen from points of view that aren’t moving at the speed of light – from the photon’s POV,
no time passes). These handshakes are part of how the universe defines itself and maintains its
self-consistency. The EPR setup links two such handshakes. The unfolding of time is the setting
up and completing of vast numbers of these handshakes.
I don’t know what would be more essential (in a practical sense) than information. Information is
the pure essence of choice with everything extraneous stripped away. In a binary system of
information, it’s just 0s and 1s or whatever you want to call it – apples and oranges, Bens and
Jerrys – but it’s all just the choice between two values – what you call these two values isn’t
included. It’s no-frills.
However, this doesn’t get at the essence of distinct choices, why something can only be true or
not true (Gödel aside), how non-contradiction arises and why it’s the key to existence. We have
to work on the logical foundation of existence, including the existence of information, but in
terms of how the universe does moment-to-moment business, information is a highly efficient
framing device.
While we’re at it, we have to get at the foundation of numbers – how they exist (in an abstract
sense that’s reflected by numbers in the material world) without contradiction and with infinite
precision. The same logical structures of non-contradiction – the infinite choices of and
handshakes between values that allow numbers to work – also allow material existence. (My
article about meta-primes in Noesis begins to discuss the infinite series of choices among
numerical values that make numbers work.)
112. How does everything in essence equate to a Turing Machine in informational cosmology?
A Turing machine constructs a picture of reality one finite step at a time. Any finite process or
system can be mathematically translated into a series of bitwise steps – a series of 0s and 1s.
Multiple Turing machines can be married into a single machine – the Church-Turing thesis
states that any computable function on the natural numbers is computable on a Turing machine.
I’m assuming that the universe (or any information-space) is finite and that possible transitions
between states of the universe are computable (given the input of new information to reflect the
outcome of events that had yet to be resolved). With these assumptions, subsequent events
can be computed by a Turing machine.
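A minimal sketch, in Python, of what a finite series of bitwise steps looks like in practice: a toy Turing machine stepping through a transition table one cell at a time. The machine and its table (a binary incrementer) are purely illustrative and are not part of informational cosmology.

# Toy single-tape Turing machine: a finite table of (state, symbol) -> (write, move, next state),
# applied one step at a time until the halt state is reached.
def run_turing_machine(tape, table, state="start", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))   # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Illustrative transition table: scan right to the end of the number, then add 1 with carries.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_turing_machine("1011", increment))   # 1011 (eleven) plus 1 -> 1100 (twelve)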
113. Where one contained armature/universe equals A2 and another container
armature/universe equals A3, does A2 operate
on a different kind of time than A3?
The armature world and the mind-space world are temporally linked – the mind-space is
reacting in real time, but there’s no coordination of physical processes – between the speed of
light in the armature world and in the mind-space, for instance.
The universe observes and defines itself. It takes information to get information. There’s not an
infinite amount of specification to be spread around. There will always be gaps in knowing. Even
in a deterministic universe, which ours isn’t, you’d need something vastly, hugely huge to model
the universe.
So our knowledge of specifics will always be at risk of being threadbare. But we can hope to
learn more about the general principles of existence. Richard Feynman laid out the possible
paths of future scientific knowledge, something like – we figure out the universe, learning just
about everything there is to know. Or we fail to figure out the universe – it’s just too tough. Or
we keep learning more and more but never learn just about everything because what there is to
know just keeps going and going.
I think we’ll mostly figure out the universe – we’ll develop a pretty good picture of the Whys. Our
knowledge, however, will always be surrounded by a deep metaphysical chasm of not yet
understanding the Whys behind the Whys. There’s no absolute knowledge – there’s just hope.
It’s not an unreasonable assumption that there’s an unlimited amount of stuff to know. There are
reasons behind reasons behind reasons, and we may never get to the rock-bottom essential
nature of things, because there may not be a rock-bottom essential nature. Everything might be
bootstrapped and self-referential and the way it is because it can’t not be the way it is without
being contradictory. You can never precisely draw a fractal or a Mandelbrot set – there’s always
an infinity of little curlicues you’re leaving out. And as you go bigger and bigger and more
complex, there are emergent properties and essential stories too big to be contained in smaller
information sets.
Having a beginner’s understanding of the Whys of the universe is just a first step to learning
how to operate within the universe. There will always be infinitely far to go in figuring everything out.
115. How does informational cosmology explain ex nihilo cosmogony for the modern form of
nothing defined by science and the modern philosophical/theological kind of “nonbeing”
nothing?
In informational cosmology, there’s a reason in the armature world for a mind-space to come
into existence. Reasons can be anything that creates a wide-angle information-processing system – natural, as when our brains form as a fetus grows; semi-mechanical, as with us building future sophisticated robots; or a spontaneous negentropic process (which the billion-year evolution of life on earth can be seen as).
Also, the principles of self-defined information-spaces should generate a roughly defined set of
all possible such spaces. If these principles more-or-less completely specify what can exist,
consistent with non-contradiction, then anything that can exist, can’t not exist – that is, must exist.
So, between every information-space having a reason to exist in an armature world that’s
created it and the principles of existence pretty much mandating that information-spaces exist,
you have pretty solid justifications for there not being just nothingness.
116. With universe as mind and theology as study of the nature of God – in large part, theology
becomes informational cosmology, and vice versa. How does this reframe the enormous
discipline of theology?
If widely embraced, informational cosmology would eventually prompt a whole new mess of
unfounded and semi-unfounded belief and misunderstanding. It has a whole set of new and
semi-new hooks on which to hang irrational beliefs.
Even if it becomes an accepted theory, not everyone’s going to believe it. I assume our
semi-artificial selves of a century hence will be pretty scientific in their beliefs, but there will be
many groups that continue to hold traditional beliefs. Figure 14 to 25 billion entities with at least
human-level cognition 100 years from now (could be many, many more if independent,
individual AIs are all over the place). The majority will hold scientific worldviews, but billions of
others will be various degrees of Christian or Muslim or Buddhist.
Informational cosmology contains more Whys than Big Bang theory. Big Bang theory asks you
to believe that nothingness is unstable and wants to explode without much philosophical
justification. I’d think that people would embrace a theory that, if largely verified, offers more
Whys within a scientific framework.
Informational cosmology also offers huge questions to try to answer – is the universe truly
conscious? If so, what’s it up to, and what world contains it? How old is the universe? Can
civilizations survive the recycling of galaxies? Is there a ladder of worlds? What are some of the
other conscious beings scattered throughout the universe up to? Do they participate in the
mechanics of the universe? Are three-dimensional space and one-dimensional time structures
that all civilizations are stuck with? And a zillion more questions. Some people will try to answer
them theologically.
117. If you had the opportunity to look at deep human time in an instant, you would see antiquity’s graveyard with a small section where we can find remnants of the great theologians. These grand figures of theology lie in the grave, with some onlookers – no doubt soon to join them – around the graveyard; look close: some are found in this grave, some at the eulogies, and others have yet to partake of this cemetery: Abraham Joshua Heschel, Albert Schweitzer, Bahá’u’lláh, Charles Wesley, Clement of Alexandria, Clive Staples Lewis, Elizabeth Stuart, Gordon Clark, John Calvin,
John Ronald Reuel Tolkien, John Wesley, Jonathan Edwards, Joseph Smith, Jr., Karl Barth,
Ketut Wiana, Leila Ahmed, Marilyn McCord Adams, Martin Luther, Pelagius, Polycarp, Prophet
Muhammad, Saint Anselm, Saint Augustine of Hippo, Saint Francis of Assisi, Saint Ignatius of
With such a deep background in the realm of ethics within the world of theology, informational ethics provides the basis for theoretical analysis of issues in ethics, such as ethical proclamations asserted in prior times. CE can be applied to each set or subset of proposed ethics; CE provides the basis for logical analysis of ethics.
How might other pervasive ethics be calculated rationally in such a moral calculus derived from informational cosmology? How might the longstanding tradition of theology work in such a framework? How do assertions of ethics in vogue within the timeline of recorded human civilization’s history – Christianity, Confucianism, humanism, Islam, Judaism, secularism, and so on – operate in informational ethics?
Most ethical implications of informational cosmology probably come from the idea that
everything exists within a framework of (technical-not-mystical) consciousness. Consciousness
is a big deal – it’s the context for everything. At the same time, it is weak – it’s technical, not
transcendent, and it doesn’t transcend death unless abetted by technology. Consciousness is
threadbare, it lies to us, and it’s not everlasting. At the same time, it’s all we have.
We have to assume that respect for conscious beings is important. At the same time, we have
evidence that it’s not. We know pigs are fairly intelligent and have feelings. At this point, only
schmucks would argue that pigs aren’t conscious. (Unless they’re arguing that no living beings
are truly conscious, in which case they’re using a completely different (and schmucky) definition
of consciousness.) We slaughter pigs by the billions, but there’s no proof that this mass killing of
conscious beings leaves a metaphysical stain on the universe.
We can go back to existentialism, that the world is meaningless, so we have to build our own
moral systems. But we’re potentially in a better position than the existentialists confronting a
random, spontaneously arising Creator-less universe that contains no inherent moral values. If
informational cosmology is correct about conscious information-spaces being the framework for
existence, that, at least, is a unifying theme for existence. We still have to build our own moral
systems, but there’s a little more to grab onto than the completely random, coldly purposeless,
Big Bang universe.
118. How might this calculate the most difficult issue in the history of theology, The Problem of
Evil?
The deal is, the processes that created us don’t have purpose, and they don’t judge. We’ve
been created by a history of things happening via natural processes. I think we arose instead of
being created by a purposeful being with plans for us. And since there’s no planner to keep
things in line, to make things nice, lots of things can happen, and some of the things that can
happen are horrible. It’s up to us to create moral systems which help us decide good and bad
and up to us to do what we can to minimize the bad. There’s no One in charge; we have to be in
charge of ourselves. But we get some help, in that existence seems to be unpreventable. We’re
in a fight against personal and civilizational and even universal oblivion (our universe, not all
possible universes), but existence itself is undodgeable. Existence isn’t a fluke, and nothingness
is not the default state. There is a fabric of existence (well, not exactly, because where would it
exist? It exists the way numbers exist.), a set (a quite likely messy, not-well-defined set) of
possible moments of existence, because there can’t not be.
Evil, as opposed to bad things happening by accident, involves choice. Something capable of
choice chooses to do something bad or to allow something bad to happen. There’s no deity in
charge who’s allowing bad things to happen. But what about the conscious entities who are so
much bigger than us that they might as well be gods? In the case of the universe itself, it
probably has an idea that the information which comprises its information-space can take forms
which are so complicated that they can include worlds with conscious beings and civilizations.
However, it’s unlikely that the universe would care about beings which are low-level relative to
itself and which do not exist in a form of which it is explicitly conscious, unless such forms
threaten to impede the universe’s information-processing. As for advanced civilizations within
the universe, they seem unlikely to go out of their way to prevent bad things from happening on
our planet.
No one is in charge, neither a Creator nor an agent or ethical system put in place by a Creator.
The universe isn’t concerned about relatively low-level worlds which form in its
information-space. The universe wants its information-space to process information. It’s okay
with, and is largely unaware of, whatever happens to specific negentropic forms taken by the
information in its information-space – that is, us.
For the time being, we’re on our own in building ethical systems and in trying to minimize evil.
Souls exist if you call our conscious selves our souls. If by “soul” you mean a magic ingredient,
not information-based, that transforms an unconscious automaton into a feeling, experiencing
being, then no, I don’t think souls exist. Our consciousness, our feeling that we exist in the
world, is a property of how we process information. It’s not the result of a transcendent soul that
rides unfeeling matter like a little sparkly cowboy or a golden thinking cap on a flesh-and-bone
Roomba.
Our soul is what we’re feeling and experiencing and the incompletely expressed background to
what we’re thinking at any given moment. At any given moment, there’s a lot we don’t
consciously know but are comfortable that we could know if we needed to. Our
moment-to-moment awareness is somewhat rooted in all our stored knowledge (including
feelings associated with that knowledge) that’s only unpacked a little at a time. Our being
accustomed to knowledge-in-waiting, our at-homeness in the world, our not freaking out that we
don’t know everything at every moment, is part of what feels like a soul – a generalized feeling
of self.
We don’t see a painting all at once – we fill it in mentally as our eyes wander over the painting.
Similarly, we don’t know ourselves all at once. We constantly fill in ourselves about ourselves as
our awareness wanders through our stored knowledge. Being comfortable with our normal brain
function is part of feeling we have a soul.
We could even speculate that a feeling of comfort with and complacency about our brain
function – this feeling of self and soul – might be encouraged by evolution, because it wouldn’t
do for every organism to be freaking out over every mental glitch. Consciousness is glitchy, and
we might have a certain optimum level of glitch-blindness that’s consistent with calm, normal
functioning. In people suffering from Alzheimer’s, failure to recognize mental deficits seems to
be fairly common. This could be a manifestation of a normally helpful defense mechanism (or it
could be another symptom – a failure in self-perception caused by the Alzheimer’s itself).
The speed and precision of perception and thought are also a big part of feeling as if we have a
soul. There’s a not-uncommon feeling among people who’ve been on heart-lung machines for
many hours during an operation, called “pumphead” or post-perfusion syndrome. Apparently,
while you’re on the machine, your circulatory system can get gunked-up, and during the month
or so after the operation, your brain becomes clogged and strokey. It becomes harder to think
and concentrate and control your mood. Some people with pumphead describe it as losing their
soul.
120. Father Teilhard de Chardin remains a controversial figure to some. In particular, his ideas
in The Phenomenon of Man (1955) evoked praise, infamy, and even calumny. He had some
ideas of note. Ideas in relation to theology and the world. With rich theological undertones, he
spoke of an Omega Point in the book The Future of Man (1964). Does this idea hold merit in
informational cosmology?
I believe that, as in Omega Point theory, the universe evolves more complicated and effective
ways to process and store information, which can include biological and technical evolution.
However, I don’t believe in the Omega Point’s teleology, that some god-like entity is the engine
of progress, drawing us towards its enlightenment. And evolution doesn’t just progress towards
increased complexity; evolution spreads out across all levels of complexity. Bacteria didn’t
disappear when humans emerged.
Also, if the universe recycles itself across octillions of years, then life within it emerges zillions of
times as a natural consequence of negentropy. (Every solar system is an open, negentropic
system, though life won’t evolve in every such system.) So you don’t have a universe
relentlessly climbing towards higher levels of complexity; you have a universe in which
complexity arises over and over, trillions and quintillions of times. Even if intelligent life arises
only once per galaxy, that’s still 10^11 instances of intelligent life, not even considering the
recycling of galaxies. The universe should gradually grow more complex as it accumulates more
information, but it could operate just fine with an unchanging amount of information, just as we
could.
121. What do you see as still needing to be done with Informational Cosmology?
Informational Cosmology:
Needs testable aspects and testing – it’s not a theory unless it can be tested. Many of its
elements are hard to test observationally – dark matter being collapsed normal matter, there
being a bunch of burned-out galaxies in the neighborhood of T = 0, the universe being many,
many times older than 14 billion years. But these same difficulties pertain to other theories of
dark matter and the large-scale structure of the universe. These theories are often tested via
mathematical modeling, which could be applied to Informational Cosmology. Fortunately
(perhaps), Informational Cosmology is also a model of our minds, which, while not sharing our
physical space, aren’t 14 billion light years away and are amenable to observation.
Needs attention. I’m trying to sell a memoir, Dumbass Genius, about the dumb things I’ve done,
with some of the dumb things being done in pursuit of a theory of the universe. The proposal for
Needs professionals to look at it. Professional scientists hate this kind of stuff. I’m working on an
article titled “On Being a Crackpot.” I can tell you that professors don’t greet wild,
all-encompassing amateur theories with unbridled joy. The standard reaction is, “I’m not even
gonna look at your theory. I’ve dealt with lunatics like you before. Your theory is almost certainly
crap, and reading the theory and explaining why it’s wrong would be a waste of time because
nothing I could say would change your crazed mind. Why did the receptionist even let you into
my office?” My best bet is to have my brain transplanted into the body of an attractive young
woman and marry Brian Greene or Neil deGrasse Tyson or Michio Kaku. We’ll get married and
have lots of sex and then he’ll have to at least pretend to pay attention to my theory. Anyone
know an attractive young woman who wants to swap bodies with a 54-year-old man with hair
plugs? [Ed. Note: This scenario is reminiscent of the film Being John Malkovich]
Needs further integration – to have its elements combined into a smoothly functioning model of
the life cycles of thoughts, galaxies, and the entire mind and universe (preferably with cool
diagrams).
Needs to be shown to address shortcomings of currently accepted theories and explain things
currently accepted theories don’t. A theory which explains why the universe does what it does is
preferable to a theory which says, “There was a big explosion, then some cosmic inflation, and
now there’s some accelerated expansion.” Current thinking tends in the direction of, “Asking
‘Why?’ is naïve – a pinpoint that explodes with vast broken-symmetry energy just is,” but a nice
metaphysical/mathematical explanation that might also explain why some physical constants
are what they are could eventually be well-received.
Needs time and for Big Bang theory to continue to accumulate contravening evidence. Thomas
Kuhn, in his classic book about how science works, The Structure of Scientific Revolutions,
explains that science progresses through a kind of punctuated equilibrium – theories prevail
until they accumulate a bunch of anomalies, and then there’s a scientific revolution. Big Bang
theory has been the boss-man theory of the universe for only 50 years. And before that, we
didn’t really have a widely accepted theory of universal structure, because all the pieces weren’t
in place. The Hubble redshift and expanding universe equations of general relativity weren’t
discovered until the 1920s. We didn’t even know that the universe extended beyond the Milky
Way until Hubble provided incontrovertible evidence in the 1920s. So we’ve had this one theory
for not too long – basically our first and only theory based on decent information about the
universe. (There was Steady State theory, but it was never boss before getting swatted down by
observational evidence.) Big Bang’s getting a little creaky – needs a lot of add-ons and geegaws
to account for the results of observation.
122. Would you ever have theorized without your outlier background?
The background definitely helps. Can imagine many different destinies – resentful math teacher,
divorced unsuccessful novelist….But think those versions would do some theorizing, too. Maybe
not as much as this version. And they certainly wouldn’t have had this forum.
It’s an old question which has an element of what might now be called nerd-shaming. It implies
that regular people with common sense can get along in the world, while you, Nerd, with your
so-called intelligence, have a hard time with things such as sports or getting a girlfriend or not
dressing weird.
As a nerdy kid, I ran into this attitude fairly often, with people saying, “Well, you may be a
brainiac, but I’ve got common sense.” This reflects a lost world of nerds being somewhat
isolated from regular people. Today, tech forces us all to be nerds to some extent, all searching
for the new best practices for living.
I regret squandering time on some stupid stuff – all the Gilligan’s Island and I Love Lucy reruns I
watched as a kid, the crazy amount of time spent suing a quiz show. (My lawsuit was justified,
but it ate up a lot of time.) I regret not being more skeptical of medical procedures which turned
out to be unhelpful at best – varicose vein stripping, CT scan….I regret not being born a couple
decades further into the future. I regret not becoming wildly handsome in my 20s.
125. You live among an interesting cohort, no doubt. A group of individuals among the elite of
intellectual abilities. What of the ethics of forming elite organizations – “elite” by admission
standards? What about joining them? What about the possibility of some exploiting concomitant
assumed authority of an individual or group? Perhaps some of those in the ultra-high IQ
community make a conscientious choice – moral choice even – to not join such societies.
Insofar as the ethics of forming, joining, and sustaining elite groups, what of the possibility of
ultra-high general ability individuals choosing to not enter?
There are probably more hyper-intelligent people not in high-IQ societies than in them. Smart,
highly successful people tend to be more involved with the things that made them successful
than with exploring their mental skills.
126. You suffer from the attention and invective of internet trolls. Trolls come in many varieties within the flora and fauna of internet life. I hear they feed on a combination of foaming at the mouth and others’ time – at least in their natural habitat. Unfortunately, they’re like starfish: if one chops the poor little echinoderm to pieces – or, as in the story of the sorcerer’s shredded broom from Fantasia – they have a “population explosion” and emerge with greater force and invective than ever before. Do you have any responses for the harsh internet crowd? In other
words, what comes across with the highest frequency? How do you respond to them?
Arrogant – Well, I’m really good at IQ tests. Does that make me a snotty jerk? I hope not. Do I
know what’s best for people or have a plan for remaking society? No. Do I want to be the boss
of everybody? No. Do I think I’m really smart? Kinda, but my Twitter handle is
@DumbassGenius, not @geniusgenius, which shows at least a little modesty.
Weirdo – Yes, I’m kind of weird – not weird just to be weird, but weird because I’m used to
figuring out on my own how to do stuff, and often this figuring works out oddly. And even though
I do weird things like go to the gym five times a day, I also do normal, responsible things like
stay married for 23 years and be a dad and hold down jobs more successfully than most people
in my profession.
Loser – If you’ve read that I’m a high-IQ bouncer and stripper and nude model, that’s kind of
loserish. Very loserish. But I’ve also been a TV writer and sometimes-producer since the late
80s. I’ve written for more than 2,500 hours of broadcast television, including the Emmys,
ESPYs, American Music Awards, Grammys, and Jimmy Kimmel Live!, earning seven Writers
Guild Award nominations (one win) and an Emmy nomination. I’ve gotten a lot of material on
TV. As I’ve said before, I’m married and a dad, which is important. I’ve got a memoir that’s being
shopped around, and I have a theory of the universe. So, not entirely a loser.
Obvious hair plugs – Yes, you can tell that I have hair plugs. They’re not the worst plugs in the
world, but they could be better. I started getting them in 1989, before the technique had been
refined, so they’re a little clumpy. But they’re better than no hair, and if you didn’t know what you
were looking for, you might not notice them.
Why should you listen to me? – I’ve been trying to figure out how the universe works since I was
ten, and I’ve had a decent foundation for a theory for more than 30 years. I might be onto something.
You were very concerned about losing your virginity – Sex is kind of a given. Unmarried couples
live together without social censure, everyone’s saturated in porn and sexualized images,
everyone suspects the worst about everyone else in terms of sexual behavior. But as a
population, we’re just about fatter than ever, there are a zillion other things to do besides sex, and
people in general don’t seem overly concerned with having sex, at least not as much as in the
70s.
127. Provisions for principles of existence would equate to the language of existence, and
therefore one can derive the more appropriate, direct, and proper phrase “principles of
existence” rather than “laws.” We have more derivations from defined principles of existence:
Principle One: universe operates within limits of complexity. Any further complexity will likely
deteriorate into optimal simplicity. Universe among logical possibilities of the set of universes
bound by optimal simplicity.
Principle Two: relevance/irrelevance, information of relevance will occupy or begin to occupy the
active center; conversely, information of irrelevance will not occupy or begin to not occupy the
active center.
Principle Three: The Persistence Project divides into The Statistical Argument for Universe and
The Statistical Argument for Consciousness. Universe cannot not exist; consciousness cannot
not exist. Therefore, the non-absolute high probability for existence, and persistence, of
universe and consciousness.
Principle Five: universe/mind symmetry, universe as mind based on net self-consistency and
information processing. Units of sufficient individuation in a universe with self-consistency and
information processing as minds too.
Principle Six: universe (Mn) implies armature (An); if armature, universe. Universe equates to
information processing; armature equates to material framework/processor: (An ⇒ Mn).
Principle Seven: armature and universe construct mind-space: (An + Mn = Sn).
Beyond the foundational elements of informational cosmology laid out in this interview, and the
first- and second-order derivations with informational ethics and other areas of discourse, what
further realms of investigation have a possible future of analysis within an informational
cosmological and informational ethical perspective?
One big field that will open up during the rest of the century is what our drives should be, as
we develop the ability to modify our drives and desires.
By the end of the century, there will be much inquiry about how to merge minds and how
connected minds should be. There will be a whole new field addressing issues of mental
connectivity. In some communities, people will want to stay completely unmerged. In others,
people will try to achieve complete merging.
A critical field will be modeling AI and predicting its behavior. You need a mathematics of
consciousness to understand AI. Out-of-control AI could be the greatest threat in history. A
related field will be the design of artificial awareness.
There will be the field of informational structure – trying to figure out what the universe and other
such systems are doing with information by looking at the distribution and behavior of matter.
Can we get any idea of what’s in the mind of the universe?
Then there’s the cultural analysis of how we’ll be affected by thoroughly understanding
consciousness. Most people probably believe that consciousness is produced by the brain, but
the culture shock may not fully set in until consciousness is fully dismantled and replicated. How
people feel and behave when they’re no longer more divine than their devices will have to be
studied.
128. In the current climate of excess sensitivity, tied to a reactionary institutional culture and the
radical conformity that follows from it – and, ironically, I do not wish to offend anyone here –
institutional analysis does have value for us. Internally, within Academia, various filters such as
achievement measurements (BA/BAA/BBA/BSc, MA/MBA/MPA/MSc, JD, MD, PhD, Post-Doctorate, and so
on) and organizational-structural apparatuses allow academic peers to regard standards as high
and one another as proficient in the relevant material under research. Externally, for independent
researchers and scholars, these same filters can prevent innovation, hinder creativity, foster intellectual
docility and acquiescence, and exclude bright and qualified outsiders (even geniuses) – to claim
otherwise would be to treat academics as angelic. Both perspectives are valid and
compatible. Particular ideals sound good when stated in an introductory course;
however, we must face facts in the following reflection, and we must speak without prevarication.
You do not have academic awards, grants, honors, titles, or persuasive associations such as
authoritative academic/institutional connections. If that is correct, and if someone in mainstream
Academia stole these ideas, arguments, calculations, and original conceptualizations, you would have
little recourse on grounds of intellectual copyright and plagiarism.
Your defence would hold little weight, especially given the possibility of defamation, character
assassination, and other tenth-rate tricks used to discredit an individual rather than to assess the claim
of plagiarism on its truth or falsity. No internal colleague, principal investigator
(or laboratory), faculty, external department, research institute, ethics board, administrative
authority, or university at large would be likely to remedy such a possibility. The Academy tends to work in
a closed way, through accreditation and peer recommendations.
129. You live and work outside the university system. Any thoughts on such an outcome? You have
developed this theory over more than three decades. Any words for someone intent on
surreptitiously pilfering even your crumbs – those with a wolf heart and a modicum of talent, but…
I have one good defence – some of this stuff turning out to be true. If it’s true in a big way – if it’s
picked up and verified by the world – someone will put me in the story.
My wife and I go to couples counselling every three or four weeks, and we discussed this in our
last session – what happens if my book doesn’t get published, if I don’t get recognition, if 30
years from now I’m a frustrated old man whose ideas have become accepted but whose
authorship isn’t generally recognized. My wife and our therapist and I agreed that would suck.
And yeah, my credentials are: not-great stripper, epic catcher of fake IDs, legendary goer-back
to high school, nude art model, compulsive overachiever on IQ tests, and writer of jokes for
late-night TV. But there’s a story there. William Blake said, “The road of excess leads to the
palace of wisdom.” My excess hasn’t been that excessive, but it hasn’t been what everyone else
has done. Charles Darwin took a five-year trip on the Beagle. He saw eroded landscapes and
thousands of species. He thought about it for 20, 30 years. His exceptional life experience plus
extended thought led to the greatest unifying theory in history – the earth’s geology plus the
vastness of organic variety equals deep time. I like to think that exceptional personal experience
plus extended thought can, even in the era of Big Science, lead to a great unifying theory.
I currently have sort of a PR person and next month will hire another PR person. My story will
get out there. Eventually, established scientists will consider it. Will someone be able to steal it?
At this point, my best chance for this not to happen is for me to keep talking and writing about it
in my goofy way.
Werner Couwenbergh
Abstract
The intuitionistic continuum has some very unusual properties that make it stand out from other
mathematical continua: it is inherently incomplete – “perpetually in the process of creation” –
and fundamentally indecomposable. In addition, every total function on the unit interval is
uniformly continuous.
These properties are a consequence of the characteristic way in which the intuitionistic
continuum is constructed. Ultimately this construction draws on the ‘two acts of intuitionism,’
defining mathematics as a “languageless activity of the mind”, originating “in the perception of a
move of time”: all mathematical objects are constructed based on an elementary ‘twoity’, given
by pure intuition. The requirement of constructability results in an intrinsic incompleteness of
infinite objects, which has far-reaching repercussions on intuitionistic logic and the nature of
intuitionistic mathematical objects.
1 Introduction
2 Situating Intuitionism
2.3 Consequences
3.1 Construction
3.2 Properties
3.3 Distinctiveness
4 Intuitionist Philosophy
4.1 Phenomenology
4.2 Ontology
4.3 Epistemology
5 Critique
6 Conclusions
References
Among the three main schools – logicism, formalism and intuitionism – that attempted to
provide an answer to the set-theoretic paradoxes that had caused a foundational crisis
in mathematics at the start of the 20th century, intuitionism arguably proposes
the most original solution. Conceiving mathematics as a languageless, mental
activity, based on the pure intuition of (inner) time, it produces a rich
mathematical universe that directly contradicts classical mathematics in key areas.
The construction and properties of the intuitionistic continuum are of particular
interest in this respect.
Following a brief introduction on intuitionism and its historical context, we will present
the basic tenets of intuitionistic mathematics – the ‘two acts of intuitionism’ –
and highlight some of the main consequences for both mathematics and logic.
[Formalists say that mathematics is a game-like manipulation of strings using manipulation rules
and that the body of propositions need not, ontologically, represent abstract objects. Logicism in
the philosophy of mathematics maintains that mathematics is reducible to logic; advocates of
logicism say that mathematics can be understood a priori, without intuition. -Ed. Note]
1 Cf. Brouwer, 1981; Michel in van Atten, Boldini, Bourdeau and Heinzmann, eds., 2008, pp. 149-
162; Heinzmann and Nabonnand in van Atten, Boldini, Bourdeau and Heinzmann, eds., 2008, pp.
163-177; Bostock, 2009; Dragalin, 2011; van Atten, 2011; Van Kerkhove, 2012a and 2012b;
McKubre-Jordens, 2012; Iemhoff, 2013.
2 “Over de grondslagen der wiskunde”, cf. Brouwer, 1907.
4 In contrast to constructivism (and classical mathematics), however, for Brouwer, logic depends on mathematics, not the other way around.
2.2 The two acts of intuitionism
Early on in his career, Brouwer developed philosophical views that could be labelled as
epistemological solipsism.5 His philosophy of mathematics, grounded in the ‘two acts of
intuitionism’, was developed over several decades, but always remained in line with
these views:
First act of intuitionism (FAI):
Completely separating mathematics from mathematical language and hence from the
phenomena of language described by theoretical logic, recognizing that intuitionistic
mathematics is an essentially languageless activity of the mind having its origin in the
perception of a move of time. This perception of a move of time may be described as the
falling apart of a life moment into two distinct things, one of which gives way to the other,
but is retained by memory. If the twoity thus born is divested of all quality, it passes into
the empty form of the common substratum of all twoities. And it is this common
substratum, this empty form, which is the basic intuition of mathematics. 6
It is the “common substratum” of this shared intuition of (the move of) time that provides
the basis for the intersubjective validity of mathematics, and thus constitutes a
‘Husserlian’ escape from strict solipsism. Contrary to Kant, Brouwer only recognizes the
(ur-)intuition of (inner) time, and abandons the apriority of space.7
Second act of intuitionism (SAI):
Admitting two ways of creating new mathematical entities: firstly in the shape of more or
less freely proceeding infinite sequences8 of mathematical entities previously acquired …;
secondly in the shape of mathematical species, i.e. properties supposable for
mathematical entities previously acquired, satisfying the condition that if they hold for a
certain mathematical entity, they also hold for all mathematical entities which have been
defined to be ‘equal’ to it ….9
The SAI thus defines the ways in which one can construct new mathematical objects from
existing ones – and ultimately from the basic quality-less twoity given by pure intuition,
that was introduced in the FAI.
5 Cf. Brouwer, 1905.
7 A number of reasons for why inner time provides a better model than space are listed in van
2.3 Consequences
The consequences of FAI and SAI are profound. Followed to their ultimate conclusions,
they require a reconstruction of both mathematics and logic.
Logic necessarily becomes time-dependent, as a statement can lack truth value at a
certain time tn, but can (or not) acquire it at a later time tn+m.10 This, in turn, implies that the
PEM – even though it will not necessarily lead to contradictions – is not universally
valid.11
The intuitionistic negation (¬A) is to be interpreted as the existence of a construction that
derives a contradiction from every possible proof of A (i.e. ¬A := A → ⊥). Consequently,
the classical law of double negation elimination does not generally hold in intuitionism
either.12
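A minimal worked sketch may make this concrete (the proof-term notation below, and the BHK-style reading of negation it uses, are supplied for illustration and are not the author's):

\[
\neg A := A \to \bot, \qquad a : A \;\vdash\; \lambda f.\, f(a) \;:\; (A \to \bot) \to \bot ,
\]

so A → ¬¬A is intuitionistically provable: from a proof a of A and any hypothetical proof f of ¬A, the application f(a) yields absurdity. No analogous construction extracts a proof of A from a proof of ¬¬A, which is why double negation elimination fails in general.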
Mathematical language arises ex post facto, as “an efficient, but never infallible or exact,
technique for memorizing mathematical constructions, and for communicating them to
others”13.14 As mathematical objects are mental constructions, based on pure intuition,
their truth cannot rely on correspondence with any external – platonic – reality, but solely
depends on the constructability of the objects themselves.
Construction of the natural numbers is based on FAI: the “falling apart of a life moment”
into two separate things, can (potentially) be repeated indefinitely, which implies the
constructability (in principle) of the smallest infinite ordinal ω.15 Constructing an
intuitionistic continuum – without invoking the PEM – is, however, less straightforward,16
and depends, among other things, on the notion of choice sequences, introduced by Brouwer in SAI.17 The
10 E.g., the Poincaré conjecture.
12 But A → ¬¬A is an intuitionistic theorem (cf. e.g., Brouwer, 1981, p. 11).
14 This also prevents language from becoming – in an ‘Hilbertian move’ – itself the object of study
in mathematics. Cf. Tieszen in van Atten, Boldini, Bourdeau and Heinzmann, 2008, p. 81.
15 Intuitionism does accept the principle of complete induction, but all infinities are to be regarded
that Brouwer developed his intuitionistic alternative (cf. Posy in Shapiro, 2005, p. 319).
17 Choice sequences were introduced by Brouwer only in 1918. Before that, he had considered the
continuum as a whole as a primitive notion, directly given by intuition (cf. Brouwer, 1907, pp. 9 and
62), thus recognizing the existence of actual infinite sets (van Dalen, 2000, p. 4 & infra: § 3.1 and §
3.2.3.a).
resulting intuitionistic continuum is “perpetually in the process of creation: […] points of
the real line develop as ‘choice sequences’ and reasoning about them takes place on the
basis of the finite amount of information that is available to date”.18 The construction of
the intuitionistic continuum will be covered in detail in § 3.1.
The properties of the intuitionistic continuum strongly deviate from those of its (classical)
counterparts (cf. § 3.2 and § 3.3), giving rise to intuitionistic set theory, topology,
arithmetic and real analysis.
The two acts of intuitionism are firmly grounded in an intuitionistic philosophy of
mathematics. These philosophical foundations will be covered in § 4.
18 Ewald, 1996, p. 1169.
3 The intuitionistic continuum
3.1 Construction
19 Brouwer, 1907, p. 9; English translation from Brouwer, 1975, p. 17.
20 A comprehensive overview and analysis of Brouwer’s early views on (and struggles with) the
22 Cf. § 3.3.
generated need not be entirely free: restrictions for (further) choices can be added (freely)
at any point in the process, as long as the choice of the next component remains
decidable.
In intuitionism a real number is given by a real number-generator. This is an ips (infinitely proceeding sequence) which is
a Cauchy sequence of rational numbers.24 The continuum of real number-generators can
be represented by the more general concept of spread.
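By way of illustration only – the paper contains no code, and the function name and the choice of √2 below are assumptions of this sketch – a lawlike real number-generator, i.e. a Cauchy sequence of rationals produced by a fixed rule, can be written as follows:

from fractions import Fraction

def sqrt2_generator(n_terms):
    """Yield the rationals of a lawlike Cauchy sequence converging to sqrt(2),
    produced by Newton's iteration x -> (x + 2/x) / 2, starting from 1."""
    x = Fraction(1)
    for _ in range(n_terms):
        x = (x + 2 / x) / 2
        yield x

# First terms: 3/2, 17/12, 577/408, ...; successive terms agree to ever more places.
for i, q in enumerate(sqrt2_generator(4), start=1):
    print(i, q, float(q))

Such a generator is fully determined by its law and so belongs to the lawlike (reduced) continuum; the free choice sequences discussed below are precisely what no such fixed law can capture.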
b. Spreads
A spread M is defined by two laws:25
1. The spread-law ΛM: this is a rule Λ which divides the finite sequences of natural numbers
into admissible and inadmissible sequences, according to the following prescriptions:
i. It can be decided by Λ for every natural number k whether it is a one-member
admissible sequence or not;
ii. Every admissible sequence a1, a2, …, an, an+1 is an immediate descendant of an
admissible sequence a1, a2, …, an;
iii. If an admissible sequence a1, a2, …, an is given, Λ allows us to decide for every
natural number k whether a1, a2, …, an, k is an admissible sequence or not;
iv. To any admissible sequence a1, a2, …, an at least one natural number k can be
found such that a1, a2, …, an, k is an admissible sequence.
The spread-law thus generates admissible ips’s of natural numbers. Graphically these
sequences can be represented as a tree of admissible finite sequences and their immediate descendants.
The continuum of real number-generators can now be defined as follows:
r1, r2, … designate an enumeration of the rational numbers
ΛM:
Every natural number forms an admissible one-member sequence
If a1, ..., an is an admissible sequence, then a1, ..., an, an+1 is an admissible
ΓM: to the sequence a1, ..., an (if admissible) is assigned the rational number ran .
To any real number-generator c a member m of M can be found so that c = m; in this sense
the spread M represents the continuum of real number-generators.
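To show the shape of a spread-law as a decidable admissibility predicate on finite sequences, here is a toy sketch (the "binary spread" example and the function name are mine, not the paper's, and it is of course only a finite caricature of Brouwer's notion):

def binary_spread_law(seq):
    """Toy spread-law: a finite sequence of natural numbers is admissible
    iff every term is 0 or 1.  Admissibility is decidable (prescriptions i and iii),
    every initial segment of an admissible sequence is admissible (ii), and
    appending 0 always yields an admissible continuation (iv)."""
    return all(k in (0, 1) for k in seq)

assert binary_spread_law((0, 1, 1))        # admissible
assert binary_spread_law((0, 1, 1, 0))     # one of its admissible continuations
assert not binary_spread_law((0, 2))       # inadmissible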
c. Species26
Species are sets defined by a characteristic property of their elements. The following
definitions are given by Brouwer and Heyting:
1. A species is a property which mathematical entities can be supposed to possess.
2. After a species S has been defined, any mathematical entity which has been or might
have been defined before S and which satisfies the condition S, is a member of the
species S.
The property of coinciding with a given real number-generator is a species, which is
called a real number.27 The intuitionistic continuum is the species of all real numbers.
26 Excerpts taken from Heyting, 1956, pp. 37-38.
27 If x is a real number and if the number-generator ξ is one of its members, then ξ represents x or
coincides with x. Caution is required in defining a concept of ‘equality’ for incomplete objects.
28 Cf. e.g., Shapiro, 2005, pp. 323-325; van Atten, Boldini, Bourdeau and Heinzmann, 2008, pp.
Berlin Lectures, and Borel mentioned it in a 1908 lecture (which Brouwer attended) – cf. van Atten,
Boldini, Bourdeau and Heinzmann, 2008, pp. 13 and 29.
idealized mathematician30 working on the solution to an as yet unsolved mathematical
problem (e.g., the Riemann hypothesis). At each point in time it can be determined
whether the creating subject has solved the problem at hand, or not (i.c. a proof or
refutation for the Riemann hypothesis). The outcomes of these subsequent checks can
now be used to define, e.g., a real number: the nth digit being dependent on the status of the
solution to the problem at stage k.
The degree of freedom of choice sequences can be restricted by limiting which elements
may be considered for each next choice.31 Brouwer also allows the introduction of new
restrictions after a certain number of choices (as long as the next choice remains
decidable). In a mature version of the SAI this is expressed as follows:
… infinitely proceeding sequences, whose terms are chosen more or less freely from
mathematical entities previously acquired; in such a way that the freedom of choice existing
perhaps for the first element p1 may be subjected to a lasting restriction at some following pn,
and again and again to sharper lasting restrictions or even abolition at further subsequent pn’s,
while all these restricting interventions, as well as the choices of the pn’s themselves, may be
made to depend on possible future mathematical experiences of the creating subject…32
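To make the role of the creating subject concrete, a schematic version of the number sketched above (my formulation and notation, not the author's) is:

\[
x = \sum_{n=1}^{\infty} c_n\, 2^{-n},
\qquad
c_n =
\begin{cases}
1 & \text{if the problem has been decided by stage } n,\\
0 & \text{otherwise.}
\end{cases}
\]

As long as the problem remains undecided, neither x = 0 nor x > 0 can be asserted; a general method for deciding “x = 0 or x ≠ 0” would amount to a method for deciding the underlying problem itself.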
e. In summary
The FAI – through the “falling apart of a life-moment” – generates ordered pairs, and by
repeated iterations, the natural numbers. Subsequently, abstract manipulations allow the
construction of finite33 mathematics: the standard arithmetic operations, negative whole
numbers and the rational numbers (as pairs of integers).
As in classical mathematics, the continuum is built from infinite convergent
sequences. The SAI stipulates that any legitimate infinite object must be given by a
principle or law, but (unlike the pre-intuitionists) Brouwer does not require the generating
laws to be entirely deterministic. The continuum for him was built up from choice
sequences, whose terms can be made dependent on future experiences of the creating
subject.
This potential for indeterminacy is the crux of intuitionistic mathematics.
30 Representing e.g., the whole of the mathematical community.
31 Cf. real number generators, where the Cauchy condition is imposed.
Choice sequences allowed Brouwer to transcend the (constructive) reduced continuum34
and to construct a full intuitionistic continuum. But the use of choice sequences
constitutes a strong deviation from classical (and constructive) mathematics, so that the
resulting intuitionistic continuum is a fundamentally different mathematical object than its
classical counterpart, and has some very distinctive properties.
Firstly, due to the non-validity of the PEM, one would expect many of the properties that
hold for the classical continuum to be not applicable – or at least strongly restricted – for
the intuitionistic continuum. However, as Brouwer shows in Die Struktur des
Kontinuums35, most of these properties can to a high degree be recovered by modifying
or re-interpreting their definitions in an intuitionistically relevant way36 (cf. § 3.2.1).
Secondly, choice sequences require the use of continuity theorems, which results in
properties that are in contradiction with classical mathematics (cf. § 3.2.2).
a. Discreteness
Definition:
A species is called discrete if for every two of its elements it is certain either that they are
equal or that they are different.
As it is possible to construct real numbers that are neither equal nor different from a given
number38, the intuitionistic continuum is – evidently – not discrete.
b. Ordering
Definition:
A species is said to be ordered if for every pair of elements (a, b) an ordering relation a < b
(equivalent to b > a) is defined in such a way that:
a = b is equivalent to the absurdity of both a < b and a > b;
a < b and a > b are mutually exclusive;
a ≠ b implies the existence of either a < b or a > b;
34 In which all points are defined through lawlike Cauchy sequences of rational numbers.
35 Brouwer, 1930. § 3.2.1 follows the structure and arguments of this paper.
36 In general, “[…] existential statements are replaced by statements about the existence of
approximations with arbitrary precision” (Iemhoff, 2013, § 3.4).
37 Cf. Brouwer, 1930, pp. 58-59 for the quoted definitions in this paragraph.
38 Cf. e.g., Posy in Shapiro, 2005, p. 328, and in van Atten, Boldini, Bourdeau and Heinzmann,
eds., 2008, p.32. More generally, of course, the PEM does not hold in intuitionism.
a < c always follows from a < b and b < c;
h < k always follows from a < b, a = h and b = k.
Just as it is possible to construct real numbers that are neither equal nor different from a
given number, it is possible to construct real numbers that are neither smaller nor greater
than a given number, and so the intuitionistic continuum is not ordered (and by extension
not well-ordered).39
Brouwer introduces the weaker properties of pseudo and virtual ordering, which do hold
for the intuitionistic continuum. Virtual order means the order relation (<) is not defined
over the whole of the continuum, but only on a subspecies of it (i.p. its elements: real
number-generators).40
39 Cf. Brouwer, 1930, p. 59 for a counterexample, and Heyting, 1956 pp.46 and 106 for the proof.
Fundamentally, this is due to the role of choice sequences in the construction of intuitionistic real
numbers, and their dependence on unsolved mathematical problems. Hence, “to order the full
continuum, one should have a method of solving all mathematical problems”. (Brouwer, 1930, p.
63). Cf. also Brouwer, 1981, p. 89.
40 Cf. Heyting, 1956, pp. 25-26 and pp. 105-107.
d. Separability in itself
Definition:
An ordered species S is […] separable in itself if one can indicate in the species a fundamental
sequence F such that between every two different elements of S there lies an element of F.
In order to preserve this property, Brouwer introduces the notion of sharp difference41.
The intuitionistic continuum is then separable in itself if:
… there exists in S a discrete and ordered fundamental sequence F such that between any
two sharply different elements of S there lies an element of F.
e. Connectedness
Definition:
An ordered species S is called connected if in each ordinal separation of S into two ordinally
separate subspecies S1 and S2, either S1 contains a last and S2 no first element, or S2 contains a
first and S1 no last element.
S1 and S2 are ordinally separate subspecies of S if every element of S1 precedes every
element of S2.
Depending on whether the notion of division (separation) is specified in terms of
composition or splitting, the intuitionistic continuum is either not connected, or the
property of connectedness lacks meaning altogether.42
In order to recover connectedness as a property of the intuitionistic continuum Brouwer
introduces the exhaustive division:
The virtually ordered species S is […] exhaustively divided into the ordinally separated
subspecies S1 and S2 of which it is composed, if for any two sharply different elements a and b
(a < b) either all elements ≤ a belong to S1 or all elements ≥ b belong to S2.
And then defines free connectedness as follows:
[A] virtually ordered species S [is] freely connected […], if for every exhaustive division of S
[…] into two ordinally separate subspecies S1 and S2, there exists an element e of S such that
every element < e belongs to S1 and every element > e belongs to S2.
With these definitions the intuitionistic continuum is freely connected.
f. Everywhere-density
Definition:
An ordered species is said to be everywhere-dense if between every two different elements a
and b of the species there […] exists an element c such that either a < c < b or a > c > b.
41 Cf. Kleene and Vesley, 1965, p. 163, for a detailed definition.
42 Cf. the theorem of the indecomposability of the continuum – cf. Heyting, 1956, p. 46, and §
3.2.2.a below.
Re-interpreting this definition in a similar way as was done for density in itself, restores
everywhere-density as a property of the intuitionistic continuum.
g. Compactness
Definition:
[…] for every indefinite sequence of closed intervals I1, I2, …, where each In+1 is a subspecies
of In, there exists an element common to all In.
Brouwer defines free compactness as:
The impossibility of the existence of a hollow nesting of intervals.
With hollow nesting defined as:
A nesting of intervals I1, I2, … [for which] for every element e of the virtually ordered species in
question there exists a definite n such that e cannot belong to In.
With these definitions, the intuitionistic continuum is freely compact.
43 Cf. van Atten and van Dalen, 2002a and 2002b.
44 Cf. van Atten and van Dalen, 2002b, (p. 7), and Iemhoff, 2013.
Based on WC-N, it can be shown that the quantified PEM: ∀x (x = 0 ∨ x ≠ 0)
is false.49
As already mentioned, this has far-reaching consequences for intuitionistic mathematics.
An immediate consequence is that on the intuitionistic continuum the law of trichotomy:
∀x ∀y (x < y ∨ x = y ∨ x > y) is not true.50
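For reference, one standard formulation of the weak continuity principle WC-N, as it appears in the literature cited here (cf. Iemhoff, 2013) – the formula below is supplied for the reader's convenience and is not quoted from this paper – is:

\[
\forall\alpha\,\exists n\, A(\alpha, n) \;\rightarrow\; \forall\alpha\,\exists m\,\exists n\,\forall\beta\,\bigl(\bar\beta(m) = \bar\alpha(m) \rightarrow A(\beta, n)\bigr),
\]

where \(\bar\alpha(m)\) denotes the initial segment of length m of the choice sequence α. Intuitively, any assignment of a number to every choice sequence must already be fixed by a finite initial segment; since an initial segment of zeros can be continued both to the zero sequence and to a non-zero real, no such assignment can decide x = 0 ∨ x ≠ 0 uniformly.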
a. Hierarchy
In his PhD thesis Brouwer presents the following alternative to Cantor’s hierarchy:
Thus we distinguish for sets the following cardinal numbers, in order of magnitude:
1. the different finite numbers.
2. the denumerably infinite.
3. the denumerably unfinished.
4. the continuous.51
45 Bar induction is a method to prove properties of choice sequences by inductively reducing them
47 van Atten, van Dalen, 2002b pp. 519-520 and also Heyting, 1956, p. 46 (Th. 2) for the proof.
49 Cf. van Atten and van Dalen, 2002b p. 520 and Iemhoff, 2013 for the proof.
The finite numbers and denumerable sets are intuitionistically legitimate objects since
they can – at least in principle – be constructed. Consequently, the resulting infinities are
only potential infinities, and the corresponding mathematical objects are intrinsically
incomplete.52
In this early stage of his thinking, Brouwer still considers the intuitionistic continuum to be
fundamentally non-constructible. So, being directly given by (ur-)intuition, the intuitive
continuum is accepted as a (primordial) mathematical object – even though it implies the
existence of an actual (completed) infinite set. In his mature intuitionism, with the
introduction of free choice sequences and the creating subject, Brouwer provides a
constructive basis for the continuum. This intuitionistic continuum is – as is clear from §
3.1 – an incomplete object, and thus avoids the “completed infinite” of the intuitive
continuum. It can be argued that through the use of free choice sequences and an
idealized mathematician, intuitionistic infinity ultimately still draws on receptive intuition.53
Cantorian higher infinities are – for evident reasons – not valid mathematical objects in
intuitionism.54
51 Brouwer, 1907, p. 62. English translation from Brouwer, 1975, p. 83.
53 Cf. Posy in van Atten, Boldini, Bourdeau and Heinzmann, 2008, pp. 35-36.
The considerable number and variety of conceptions of the continuum59 is the result of a
long evolutionary process in the development of mathematics. Amidst this diversity of
continua, the intuitionistic continuum occupies a special place. Free choice sequences
introduce a fundamental indeterminacy into intuitionistic mathematics, and the creating
subject gives it a subjective character. What results is a heterodox, often complex, but
also very rich mathematics.
The classical continuum, “defined as an infinite collection whose elements are
themselves infinite sets, each of whose elements in turn is an infinite sequence”,60 has an
‘atomistic’ structure: it is the infinite sum of its parts – individual points – which are pre-
given and static. These points are not connected in any way, and so the classical
continuum is ‘brittle’: it can be broken up into pieces.61 According to Brouwer it has “a
mere linguistic, and no mathematical, existence”.62 The intuitionistic continuum, on the
other hand, is given as a whole. It generates its constituent parts, which are overlapping
and unfinished. The resulting continuum is ‘syrupy’, innately indecomposable, and closely
resembles the intuitive ‘Aristotelian’ continuum. It even remains indecomposable after
removal of the rational numbers.63
Constructive analysis and intuitionism share their constructive principles regarding the
legitimacy of mathematical objects, but they do not share the same logic. Constructive
analysis64 accepts classical logic, including the PEM, and does not contradict classical
analysis. Rather it can be seen as a restriction of classical analysis – and i.p. the
constructive real line can be viewed as a restriction of its classical equivalent: it is
inherently decomposable.
59 Cf. e.g., Feferman, 2008, and Longo, 1999.
61 It should be noted that constructions have been proposed for a classical continuum without
points, detaching the notions of indecomposability and non-punctiformity. Cf. Hellman and
Shapiro, 2013,
62 Brouwer, 1981, p. 93.
63 Cf. § 3.2.2.b. This makes sense from a ‘dimensional’ point of view: “Classically one gets the
one-dimensional continuum as the sum of the two obvious zero-dimensional subsets, the rationals
and the irrationals. But intuitionistically the irrationals are themselves already one-dimensional”
(van Dalen, 1997, p. 1151). The intuitionistic reals and the intuitionistic continuum are of the same
genus, to paraphrase Poincaré.
64 Following Bishop’s Foundations of Constructive Analysis.
65 Bell, 2001.
66 In smooth infinitesimal analysis lines are composed of infinitesimal (one dimensional) segments
68 Cf. Weyl, 1994 (whose own views were closely related), and Bell, 2005.
4 Intuitionistic philosophy
Brouwer’s fundamental issue with the Cantorian set-theoretic construction of the
continuum and the transfinite was that it ultimately relied on arbitrary objects: “sets that
cannot be described and sequences that cannot be calculated”.70 In response, Brouwer
developed his intuitionistic alternative: a constructive theory of mathematics that
transcends the discrete and the finite, and encompasses the continuous and the infinite.
Philosophically, Brouwer’s intuitionism “rests upon a unique epistemology, a special
ontology, and an underlying picture of intuitive mathematical consciousness”:71 it is
fundamentally based on a phenomenological worldview.
4.1 Phenomenology
The phenomenological basis of Brouwer’s (mathematical) philosophy72 is clearly outlined
in Consciousness, Philosophy, and Mathematics.73 Consciousness and the mind are
spawned by (the sensation of) the primordial phenomenon of transition between stillness
and sensation:
This initial phenomenon is a move of time. By a move of time a present sensation gives way to
another present sensation in such a way that consciousness retains the former one as a past
sensation, and moreover, through this distinction between present and past, recedes from
both and from stillness, and becomes mind.
As mind it takes the function of a subject experiencing the present as well as the past
sensation as object. And by reiteration of this twoity phenomenon, the object can extend to a
world of sensations of motley plurality.74
Subsequently, “in a dawning atmosphere of forethought”, free will then creates
awareness of the causally ordered world:
[…] In the world of sensation experienced by mind, the free-will-phenomenon of causal
attention occurs. It performs identifications of different sensations and of different complexes
of sensations, and in this way […] creates iterative complexes of sensations. An iterative
complex of sensations, whose elements have an invariable order of succession in time, whilst
70 Posy in Shapiro, 2005, p. 322.
72 "[T]he philosophy […] of intuitionism [is] inseparable from [its] technical core: intuitionistic
mathematics […], and intuitionistic logic […]”, (Ibid. p. 318).
73 Brouwer, 1949.
In intuitionism, mathematical abstraction is done ab origine, i.e., as expounded in the FAI,
by dissociating the initial ‘twoity’ from all sensory content. Hence, there is neither reliance
on pre-existing empirical objects (i.p. to introduce the natural numbers), nor are
mathematical operations skeletons of empirical operations. “Mathematics is an
independent, empirically empty, process of its own”.77 This independence from the
empirical world is maintained in the SAI, given that the creating subject is interpreted as
truly idealized.78
Despite this apparent disconnect between the empirical and the mathematical world, both
remain intimately linked, as they share the same starting point (i.e. the ur-intuition of the
move of time), and have a parallel formative process: both activities are based on
sequences and the result of willful and creative acts. They bifurcate at the moment of
abstraction of the initial ‘twoity’:
[…] The falling apart of moments of life into qualitatively different parts, to be reunited only
while remaining separated by time [is] the fundamental phenomenon of the human intellect,
[…] by [abstraction] from its emotional content [it passes] into the fundamental phenomenon of
mathematical thinking, the intuition of the bare two-oneness.79
Hence, intuitionistic mathematics can be used to model the empirical world:80
The significance of mathematics with regard to scientific thinking mainly consists in this that a
group of observed causal sequences can often be manipulated more easily by extending its
of-quality-divested mathematical substratum to a hypothesis, i.e. a more comprehensive and
75 Brouwer, 1949, p. 1235.
76 Ibid.
78 Cf. § 4.1.3.
80 It is noteworthy that intuitionism – like the phenomenal world – isn’t fully determinate (as
exemplified by the refutation of the tertium non datur) and doesn’t contain actual infinities
(although intuitionism isn’t finitistic in the strict sense either).
more surveyable mathematical system. Causal sequences represented in abstraction in the
hypothesis, but so far neither observed nor found observable, often find their realization later
on.81
Even though Husserl and Brouwer were not directly influenced by each other,82 strong
parallels can be drawn between intuitionism and phenomenology. Van Atten83 argues that
Brouwer’s (later) intuitionism can be interpreted as a part of Husserl’s transcendental
idealism. To that end, he highlights four similarities:
Like phenomenology, intuitionism recognizes intuition as the legitimizing ground of all
knowledge, and recognizes a form of intellectual intuition (categorial intuition). […]
As in phenomenology, in intuitionism the fundamental notion of subject is not
psychological but transcendental.84 […]
Like phenomenology, intuitionism recognizes the fundamental role of time
awareness85 in our being aware of any object, and indeed in the bringing about of
intentionality itself. […]
Like phenomenology, intuitionism studies essential, structural properties of
consciousness, not those of any particular individual’s consciousness.86
Furthermore he suggests that “the fundamental ‘unfreedom’ [in Husserl’s transcendental
idealism] is that imposed by the basic structure of inner time consciousness”,87 and it
therefore “cannot provide a foundation for a pure mathematics that would go beyond
intuitionism”88. Hence the Husserlian concept of constitution of mathematical objects and
the intuitionistic notion of their construction would coincide. This is a strong claim, given
that for Husserl mathematical objects and truths are static, complete and allzeitlich,
whereas in intuitionism some mathematical objects – i.p. (free) choice sequences – are
by their very nature dynamic and incomplete.
81 Brouwer, 1949, p. 1237.
82 Husserl and Brouwer met in person at least once (in 1928 – cf. van Atten, 2007, p. 5).
84 “Husserl and Brouwer describe the transcendental ego eidetically, i.e., in terms of its essential
properties. […] Describing essential properties and describing an idealized [creating] subject here
amount to the same, as the idealization involved is that of abstracting from empirical limitations,
and essential properties are those that govern any instance, empirically possible or not.” (van
Atten, 2010, p. 21).
85 I.e. Inner, or internal, time.
87 van Atten, 2010, p. 84. Cf. also van Atten, 2007, pp. 95-101.
Tieszen89 mentions comparable correspondences between Brouwer’s thinking and
Husserl’s transcendental phenomenological idealism (i.p. regarding the role of
consciousness and its origin in the flow of internal time, but also e.g., regarding the
conception of the intuitive – non-punctiform – continuum and logical constants90). In his
view “the basic intuition of mathematics, in Brouwer’s sense, is a founded, formal intuition
in Husserl’s sense”, but not a “categorial intuition of unchanging, exact objects”.91
4.2 Ontology
Existence, for Brouwer, is tantamount to constructability: “what you build is what there
is”.92 This deviates from Husserl’s more agnostic position that ἐποχή should be exercised
regarding the existence of the intentional objects of consciousness.
Intuitionistic objects (e.g., real numbers) need neither be complete, nor have determinate
properties (or even identity) in order to be legitimate – only a construction (and even that
only in principle) needs to be provided to guarantee their existence.
Reductio ad absurdum arguments, on the other hand, cannot – as seen above – be used
to provide valid intuitionistic proofs of existence.
Intuitionistic logic follows the constructive mathematical ontology, but the latter has
primacy (as said, this is a fundamental difference with classical mathematics, which does
not require an ontology, but builds on classical logic). However, according to “the
intuitionistic interpretation of mathematical statements, the intuitionistic ontology [is] a
consequence of the intuitionistic theory of meaning,93 not a premise for it”.94 The
intuitionistic ontology thus rests upon “a pervasive phenomenological base”.95 Bostock
even goes as far as to question the relevance of intuitionistic ontology altogether.96
89 In van Atten, Boldini, Bourdeau and Heinzmann, pp. 78-95.
90 For Husserl mathematical judgments can be either fulfilled, frustrated or neither.
91 Tieszen in van Atten, Boldini, Bourdeau and Heinzmann, p. 90. I.p. choice sequences would not
qualify as “ideal, objective, exact, mathematical objects” (ibid. p. 91). Van Atten gives arguments
as to why they would qualify (in van Atten, 2007, pp. 95-101).
92 Posy in Shapiro, 2005, p. 333.
According to the FAI, all mathematical knowledge is ultimately based on the primordial
intuition of the “perception of the move of time”, “divested of all quality” associated with it.
It thus precedes any sensory or empirical knowledge. As mathematical knowledge is also
a necessary basis for empirical science, it corresponds to synthetic a priori knowledge in
the Kantian sense.97
Taken together, the FAI and SAI specify that mathematics is a constructional “activity of
the mind”, and that the extension of mathematical knowledge implies an extension of that
activity:
Growth or development […] cannot proceed via the logical extrapolation of its contents (as
classical epistemology maintains), but […] only by its phenomenological or experiential
development – that is to say, its extension into further experience of the same epistemic
kind.98
Logical inference can have heuristic value, but it has no proof value, so it cannot lead to
new mathematical knowledge:
If the principles of classical logic were to be amended in such a way as to eliminate [the]
deficiencies of incompleteness and unsoundness, then one would have […] an accurate
device for determining which propositions are potential contents for intuitionistic proof-
experiences. However, such a device could still serve only to identify those propositions that
are capable of intuitionistic justification – which is a very different thing from (and epistemically
inferior to) actually supplying such justification.99
The development of mathematical knowledge, for the intuitionist, is therefore inherently
phenomenological, and it cannot be reduced to a mere “intellectual acceptance of a
proposition” without epistemic loss. For the same reason, language is also an “illegitimate
surrogate”, as “no symbolic notation can ever accurately report the content of a conscious
moment”.100
97 Cf. Posy in Shapiro, 2005, pp. 331-333, and supra: § 4.1.2.
101 Cf. e.g., the counterexamples in 1948A, 1948C, 1949A, 1949B, 1950A, 1950B, 1951, 1952C,
103 Cf. supra (§ 3.1.d), and Posy in Shapiro, 2005, pp. 344-345.
104 “The intuitionist thus finds himself in the unenviable position of depending upon the existence of
something – an undecidable proposition – that he cannot in fact construct, and whose possible
existence he thus may not assert!” (Posy in Shapiro, 2005, p. 345).
105 The ‘idealized mathematician’ has been linked to both the Husserlian and Kantian
transcendental subject (cf. van Atten, 2007, p. 164, note 245, and supra (§ 4.1.3)), but the exact
ontological status is debatable. One could argue that the creating subject necessarily exists
independent from our (combined) mental abilities.
The initial abstraction that creates the “common substratum of all twoities”
involves two distinct steps:
1. Introduction of a boundary: the “falling apart of a life moment” renders discrete
what was initially continuous,
2. Removal of content: “divesting all quality” reduces what was initially different
to a contentless identity.
The result is a fundamental ontological shift: what was a continuous measure of
differentiation or change has been transformed into a discrete measure of
duplication or repetition. I.p. the second step poses a problem, as it can be argued
that this is an idealization rather than an abstraction. Instead of being given by
pure intuition, the resulting twoity would then be more akin to a platonic idea.
Apart from these philosophical considerations, there are also some practical concerns
regarding intuitionistic mathematics. The introduction of free choice sequences and the
method of the creating subject unquestionably leads to a very rich mathematical universe
– as exemplified by the intuitionistic continuum. But at the same time it considerably
increases the technical complexity of mathematical practice, which becomes more
laborious106 and hence less palatable for the mainstream mathematical community.
Although intuitionism may not be the “quixotic curiosity”107 some claim it to be, it has – for
the abovementioned reasons – enjoyed a relatively moderate level of success.
106 Cf. the recovery of classical properties – § 3.2.1.
6 Conclusions
Using choice sequences and the method of the creating subject, Brouwer is able
to construct an intuitionistic equivalent to the classical continuum. Most of the
properties of the latter can be recovered for intuitionism by simply revising or
re-interpreting the definitions. But being fundamentally incomplete (“perpetually in the
process of creation”), the intuitionistic continuum also displays some highly
idiosyncratic properties. Most notably, it is fundamentally indecomposable, and every total function on the unit interval is uniformly continuous.
These properties are reminiscent of an intuitive continuum and set the intuitionistic
continuum apart not only from its classical counterpart, but also from e.g., other
constructive continua, the nonstandard hyperreal line and the real line in smooth
infinitesimal analysis.
Bell, J. L., 2001, “The Continuum in Smooth Infinitesimal Analysis”, in Reuniting the Antipodes –
Constructive and Nonstandard Views of the Continuum, Synthese Library, 306: 19-24.
Brouwer, L.E.J., 1905, Leven, kunst en mystiek, English translation by van Stigt in Notre Dame
Journal of Formal Logic, 37 (3): 381-429.
Brouwer, L.E.J., 1907, Over de Grondslagen der Wiskunde, Academisch Proefschrift, Maas & Van
Suchtelen, Amsterdam-Leipzig.
Brouwer, L.E.J., 1908, “De onbetrouwbaarheid der logische principes”, Tijdschrift voor
Wijsbegeerte, 2: 152-158. English translation in Brouwer, 1975, pp. 107-111.
Brouwer, L.E.J., 1930, Die Struktur des Kontinuums, Wien: Komitee zur Veranstaltung von
Gastvorträgen ausländischer Gelehrter der exakten Wissenschaften, in Brouwer, 1975, pp. 429-
440. English translation in Mancosu, 1998, pp. 54-63.
Brouwer, L.E.J., 1949, “Consciousness, philosophy and mathematics”, Proceedings of the 10th
International Congress of Philosophy, Amsterdam 1948, 3: 1235-1249.
Brouwer, L.E.J., 1981, Brouwer's Cambridge Lectures on Intuitionism, D. van Dalen (ed.),
Cambridge University Press, Cambridge.
Detlefsen, M., 1990, “Brouwerian Intuitionism”, Mind, New Series, 99 (396): 501-534.
Dummett, M., 2000, “Is Time a Continuum of Instants?”, Philosophy, 75 (294): 497-515.
Ewald, W., 1996, From Kant to Hilbert: A Source Book in the Foundations of Mathematics,
Volume 2, Clarendon Press, Oxford.
Gielen, W., de Swart, H., Veldman, W., 1981, “The Continuum Hypothesis in Intuitionism”, The
Journal of Symbolic Logic, 46 (1): 121-136.
Hellman, G., Shapiro, S., 2013, “The Classical Continuum without Points”, Review of Symbolic
Logic, 6 (3): 488-512.
Iemhoff, R., 2013, "Intuitionism in the Philosophy of Mathematics", The Stanford Encyclopedia of
Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.),
URL=<http://plato.stanford.edu/archives/fall2013/entries/intuitionism>.
Longo, G., 1999, “The Mathematical Continuum: From Intuition to Logic”, in Naturalizing
Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science (pp. 401-428), J.
Petitot et al., eds., Stanford University Press, Stanford.
Mancosu, P., ed., 1998, From Brouwer to Hilbert. The Debate on the Foundations of
Mathematics in the 1920s, Oxford University Press, Oxford.
Shapiro, S., ed., 2005, The Oxford Handbook of Philosophy of Mathematics and Logic, Oxford
University Press, Oxford.
van Atten, M., 2010, “Construction and Constitution in Mathematics”, The New Yearbook for
Phenomenology and Phenomenological Philosophy, 10: 43-90 (M. van Atten, personal
communication, March 2, 2014).
van Atten, M., 2011, "Luitzen Egbertus Jan Brouwer", The Stanford Encyclopedia of Philosophy
(Summer 2011 Edition), Edward N. Zalta (ed.),
URL=<http://plato.stanford.edu/archives/sum2011/entries/brouwer>.
van Atten, M., Boldini, P., Bourdeau, M., Heinzmann, G., eds., 2008, One Hundred Years of
Intuitionism (1907-2007), Birkhäuser, Basel.
van Atten, M., van Dalen, D., 2002a, “Arguments for the Continuity Principle”, The Bulletin of
Symbolic Logic, 8 (3): 329-347.
van Atten, M., van Dalen, D., 2002b, “Intuitionism”, in A Companion to Philosophical Logic, D.
Jaquette, ed., Blackwell, Oxford, pp. 513-530.
van Atten, M., van Dalen, D., Tieszen, R., 2002, “Brouwer and Weyl: The phenomenology and
mathematics of the intuitive continuum,” Philosophia Mathematica, 10: 203-236.
van Atten, M., 2007, Brouwer meets Husserl: On the Phenomenology of Choice Sequences,
Springer, Dordrecht.
van Atten, M., 2009, “Intuitionism as Phenomenology”, Unpublished manuscript, (M. van Atten,
personal communication, March 2, 2014).
van Dalen, D., 1997, “How connected is the intuitionistic continuum?”, Journal of Symbolic Logic,
62: 1147-1150.
van Dalen, D., 2000, “What is Mathematics? Intuitionistic Reflections”, Published in: Issues in
Contemporary Western Philosophy Islam-West Philosophical Dialogue. The Papers Presented at
the World Congress on Mulla Sadra (May, 1999, Tehran), 7: 175-190.
Van Kerkhove, B., 2012a, “Hedendaagse Filosofie van de Wiskunde in Historisch Perspectief”
[Contemporary Philosophy of Mathematics in Historical Perspective], course text for Philosophy of
Mathematics, Master’s programme in Philosophy, Logic and Philosophy of Science, VUB.
Van Kerkhove, B., 2012b, “De Grondslagenstrijd” [The Foundational Debate], English-language
course text for Philosophy of Mathematics, Master’s programme in Philosophy, Logic and
Philosophy of Science, VUB.
Weyl, H., 1994, The Continuum: A Critical Examination of the Foundation of Analysis, trans. S.
Pollard and T. Bole, Dover Publications, New York, (English translation of Das Kontinuum,
Leipzig: Veit, 1918.)
Ron Yannone
My sincere hope is that readers of this document will enjoy several of the problems posed here and
find some of them easy, amusing, challenging, and easy to “carry around” in their heads as they go
about their daily activities. I hope, too, that you will share this document with parents of gifted
children who like math – be they middle schoolers, high schoolers or college students. My
desire is that you share this document with math teachers you know, or with tutors/mentors in math
teams, the MATHCOUNTS program, American Mathematics Competitions (AMC), and the like.
Litton Industries was acquired by Northrop Grumman in 2001. Northrop is a highly successful
defense contractor giant with many innovations to its credit. The success of Northrop Grumman
in developing extremely complex systems is in line with the quality of excellence and innovation
Litton Industries had – and offered Northrop Grumman. Visit the Northrop Grumman home
website to read of the history and the specific legacy Litton Industries had prior to becoming part
of Northrop Grumman in 2001.
The last problem in Litton’s Problematical Recreations was #580 from March 29, 1971.
(1) How much money did Litton invest over the 12 years of hosting and championing
Problematical Recreations?
(2) Based on their closing advertisement on the last page of their 11 annual booklets, how many
engineers, mathematicians, scientists and computer programmers made inquiry with Litton and
ended up working for them?
(3) Did Litton ever proactively contact those readers who submitted multiple correct and
innovative answers to their weekly problems - if so, how many?
(4) How many man-hours were expended in all in the 12-year series by Angela Dunn and her
team of mathematicians?
This document covers Litton’s Problematical Recreations, which were produced over a 12-year
period (1960-1971). It all began for me when I was introduced to them via a little annual booklet by
Mr. Otto Rittenbach, an electronics engineer who worked at the U.S. Army’s Camp Evans
facility in Belmar, NJ. Read more later in this article.
I think the best overview is via the preface given by Angela Dunn, the editor of Litton’s
Problematical Recreations, in the Dover publication “Mathematical Bafflers” (1964; reissued 1980),
with creative woodcut illustrations by Edward Kysar for each problem.
This book is an outgrowth of one of the most successful campaigns in the history of technical
publications, a weekly series called “Problematical Recreations,” which ran for twelve years in
Aviation Week [and Space Technology] magazine and the Electronic News, winning the top
readership award year after year.
The quality of their written response was the key to the series’ continuing appeal. Week after
week letters from engineers, mathematicians, scientists, and puzzle fans in general would offer
a more elegant solution, or an interesting mathematical sidelight to a problem from our series.
Often readers would challenge us for an explanation, and occasionally they would disagree,
sometimes vehemently, with our published solution. But always they exhibited original thinking.
It was the quantity of imaginative puzzle contribution that poured in from all over the United
States and from a dozen foreign countries that kept the campaign going at a high level of
interest for twelve years.
As director of “Problematical Recreations,” from 1962 until its cancellation in 1971, I was
fortunate in acquiring a staff of some of the best creative minds in mathematics to help check
and evaluate each original contribution. My chief consultant, the late David L. Silverman of the
University of California at Los Angeles, was truly a mathematical genius. His inexhaustible
knowledge, his infinite supply of ingenious original puzzles, and his ability to communicate any
principle or idea simply are responsible for both the series’ success and this volume. One of
David Silverman’s many admirers, Mr. George Koch, President of Guidance Industries
Corporation of San Francisco, commented: “He was the only mathematician I found in front of
whom I was comfortable admitting ignorance. He answered my ignorance with information, not
disdain, and thereby taught me a great deal.”
I relied heavily on Mr. Silverman’s expertise in handling the volume of correspondence. Each
letter was answered personally, after careful checking and research, a fact which so surprised
and pleased one reader in Washington, D.C., that he wrote me: “Thank you for not sending me
the ‘bed bug’ letter. You present Litton as a warm and human organization.” Because
“Problematical Recreations” may enhance your enjoyment of a puzzle, shed new mathematical
light, or simply amuse, selections have been included at the beginning of each of the seven
sections of this book. (Bed bug letter: a form letter, from a company to an individual who has
made a complaint, which promises to correct a situation, but is actually only intended to pacify
the person making the objection.)
When the puzzles were originally published, their sequence was chosen to provide interesting
variety from week to week. You will find, therefore, that the selections here run the gamut from
On the other end of the scale, advanced mathematics is involved in solving a variation of “The
Alpenstock” (first problem of Chapter 5), and an acquaintance with Number Theory is required
for the problems in Chapter 7.
In making this selection of more than 150 posers, we chose those that we hope combine the
unusual, the unexpected, and the non-obnoxious. You will find, therefore, that a majority of the
solutions may be reached by the application of a well-conceived hunch rather than by drudgery
and exhaustive checking of tables. For our object is, after all, to entertain.
The mathematical challenges that follow have been contributed by dozens of puzzlers
throughout this country, and from all over the world, most of them skilled mathematicians and
applied scientists. We share their pet brain twisters and original work with you in these pages.
For consistently submitting original and ingenious puzzles, the editor is indebted to: Mr. Leonard
A. Baljay of Cherry Hill, New Jersey; Mr. Walter Penney of Greenbelt, Maryland; Mr. Charles
Baker of Los Angeles, California; Mr. Noel A. Longmore of Kent, England; Mr. B. van Blaricum
of Melbourne, Australia; Mr. William Shooman of Orange, California; and Mr. J. N. A. Hawkins of
Pacific Palisades, California.
For their patient counseling and technical assistance in conducting the series, the editor is
grateful to Dr. Silverman and Dr. Harry Lass of the California Institute of Technology.
This book is for those who take pleasure in the process of reasoning, who enjoy exercising their
inventive faculties, who delight in the pursuit of an elusive proof. If the reader enjoys these
particular challenges, he or she is indebted to all the gentlemen named above and to all those
hardy fellows who took the time to write to “Problematical Recreations.”
Angela Dunn
Ron Yannone
Email: [email protected]
Many unending thanks to electronics engineer Mr. Otto Rittenbach (father of my high school
friend Klaus). Otto worked with my dad and my friend David Anick’s father George (also an
electronics engineer) at the U.S. Army’s Camp Evans radar facility in Belmar, NJ. In his humble
living room in Neptune, NJ, Otto shared some of Litton’s Problematical Recreations booklets
from a couple of the years they were published (between 1960 and 1971). I was a senior in high
school then. Otto had over 50 patents. My junior and senior years at Neptune High School are
my most memorable early years in developing and nurturing a strong love for math.
I have been thoroughly enjoying these problems at age 62 (retired) and sharing several with my
wife Jacqueline. I still need to “work” on many of the following problems shared here with you,
the reader. I take my time, look up formulas and principles I have forgotten and sometimes turn
the problem into a small research adventure–spanning several days as required. I work on
multiple problems in parallel so that when I hit a brick wall, I can switch to another problem with
maybe a fresh mind. I vividly recall sharing what I thought were the toughest problems (from the
few booklets I eventually bought) with David Anick (math genius in every sense) and most times
he solved them easily and could furnish supporting proofs as well.
Some of the problems below are from Angela Dunn’s book and some from a book by James F.
Hurley, professor at the University of California, titled Litton’s Problematical Recreations and
published by Van Nostrand Reinhold Company. Other problems are from the actual Litton
Industries annual booklets I have obtained over the past month (Books 3, 4, 5, 6, 7, 8, 9 and
11). Keep in mind I was in high school (over 45 years ago) when introduced to Litton’s
Problematical Recreations and although I had the math background I was not “sharp” in
ferreting out quickly and successfully the tricks these problems posed in many cases.
Oftentimes I “cried uncle” too quickly!!! Many of the readers of this document were once avid
puzzle aficionados, now with memorable technical careers and experiences behind them – they
can leverage their experiences and expertise in trying a few of these.
As you try specific problems, feel free to send me your answers [[email protected]]
and I will try to confirm or give suggestions if desired. In this set of exercises, I am certain you
will find some easy, some entertaining, some even very challenging and enlightening. Several of
these can be done with simple, clever thinking, versus the normal “school procedures” learned.
● A coffee pot with a circular bottom tapers uniformly to a circular top with radius half that
of the base. A mark halfway up the side says “2 cups.” Where should the “3 cups” mark
go? Can you determine the number of total cups the coffee pot holds? Can you
determine the number of cups the full cone involved (the pot being a subset) holds?
What percent difference in height is the 3-cup level from the full pot level? [H(191),
B8(#28)]
● A castle and a bishop are placed at random on different squares of a chessboard. What
is the probability that one piece threatens the other? [D(143), B4(#13)]
● What is the base of the positional numeration system in which 12102 + 1 = 12220?
[B9(#10)]
● Two hot rodders compete in a drag race. Each accelerates at a uniform rate from a
standing start. Al covers the last quarter of the distance in 3 seconds; Bob covers the
last third in 4 seconds. Who won, and by how much? (Can you conjure up a clear
numerical example where the cars travel a distance d and meet the ending requirements
stated above?) [D(20)]
● Lazy Levy wishes to toss a snowball over a building 144 feet by 144 feet and 133 feet
high with the least expenditure of energy. How far away from the building should he
stand? (Can you find a solution where mental arithmetic might suffice? Can you
determine the launch velocity and launch angle and the time for the snowball to reach
the apex of its trajectory and the time for it to clear the roof-top?) [B9(#16)]
● A hula hoop of circumference 40 inches performs one revolution about a girl with a
20-inch waist. How far has the original point of contact of the hoop traveled? [B9(#43)]
● A hostess plans to serve a square cake with icing on top and sides. Upon determining
how many guests want cake, what method should she use to ensure that each guest will
receive the same amount of cake and icing? (Can you determine the angles between
each cut for each number of pieces desired – especially the odd number of cuts? What
do you get for the angles between the pieces for 3 cuts, 5 cuts?). [H(194), B8(#39)]
● A contractor estimated that one of his two bricklayers would take 9 hours to build a
certain wall and the other 10 hours. However, he knew from experience that when they
worked together, 10 fewer bricks got laid per hour. Since he was in a hurry, he put both
men on the job and found it took exactly 5 hours to build the wall. How many bricks did it
contain? [D(34), B6(#34)]
● What is the cube root of INVENTORY? [B7(#25)]
● Without using any symbols, arrange the digits 1, 3, 5, 7, 9 to equal the digits 2, 4, 6, 8.
[D(93), B4(#27)]
● If the hour and minute hands of a watch are interchanged, how many different possible
times could the watch show? [D(167), B4(#30)]
● Smith said to Jones, “I just bought four mujibs at $21.78 apiece, and I noted a curious
thing. The total was $87.12, the price of a mujib in reverse order.” “Isn’t that a
coincidence,” said Jones. “The other day I bought some glinches (no, not one or four)
and I remarked the same thing.” How much does a glinch cost and how many did Jones
buy? [D(85)]
● Two men are walking toward each other alongside a railway. A freight train overtakes
one of them in 20 seconds and exactly 10 minutes later meets the other man coming in
the opposite direction. The train passes this man in 18 seconds. How long after the
train has passed the second man will the two men meet? (Constant speeds are to be
assumed throughout.) [D(19), B4(#16)]
● Four boys, Alan, Brian, Charles and Donald, and four girls, Eve, Fay, Gwen and Helen, are each in love with one of the others, and, sad to say, in no case is their love requited. Alan
loves the girl who loves the man who loves Eve. Fay is loved by the man who is loved
by the girl loved by Brian. Charles loves the girl who loves Donald. If Brian is not loved
by Gwen, and the boy who is loved by Helen does not love Gwen, who loves Alan?
[B4(#6)]
● What operation can be performed three successive times on a solid cube, so that at
each stage, the surface area is reduced in the same proportion as the volume? [H(46)]
● An icicle forming from a dripping gutter is in the shape of a cone five times as long as it
is wide (at the top). A few hours later it has doubled in length and the generating angle
has also doubled. How does its present weight compare with its previous weight?
[H(193)]
● If X + Y + Z = 1, prove XY + YZ + XZ < ½ [D(9)]
● A new kind of atom smasher is to be composed of two tangents and a circular arc which
is concave toward the point of intersection of the two tangents. Each tangent and the
arc of the circle is 1 mile long. What is the radius of the circle? [H(168), D(58), B8(#38)]
● Johann Jungfrau, the famous mountain climber, was traveling through the Trondheim
timber country one day. Quite by accident he dropped his trusty alpenstock, an
unusually straight stick, near the buzzsaws where, in two shakes of a yak’s tail, it was
neatly cut into three pieces. What is the probability that these three pieces can be
placed together to form a triangle? [D(137), B5(#3)]
● An astute mathematician drives 21 miles round trip to work each day. On the way he
passes a gas station which advertises free gas if the price at which the pump stops when
filling the tank consists of repetitive digits, i.e., $1.11, $2.22, $3.33, . . . , $9.99. Gas
costs 30 cents per gallon and our mathematician knows his car delivers exactly 15 miles
per gallon. Considering no additional driving, he computes that once he fills his gas tank
at the station he can get all his gas free. The station is an integral number of miles from
his home. Where is it with respect to his home? [B4(#20)]
● Find a two-digit number which is a factor of the sum of the cubes of its digits, while the
reverse of the number is a factor of the sum of the fourth powers of the digits. [H(144),
D(206)]
● There are nine cities which are served by two competing airlines. One or the other
airline (but not both) has a flight between every pair of cities. What is the minimum
number of possible triangular flights (i.e., trips from A to B to C and back to A on the
same airline)? [D(31)]
● Express as the product of sixth- and ninth-degree polynomials with integral coefficients.
[D(30), B6(#13)]
● Archimedes O’Toole, a mathematical poet, on seeing this equation, translated it into a
limerick. Can you duplicate this feat? [H(43)]
● Six men decide to play Russian roulette with a six gun loaded with one cartridge. They
draw for position, and afterwards, the sixth man casually suggests that instead of letting
the chamber rotate in sequence, each man spin the chamber before shooting. How
would this improve his chances? [H(111)]
● A mathematician whose clock has stopped wound it, but did not bother to set it correctly.
Then he walked from his home to the home of a friend for an evening of hi-fi music.
Afterwards, he walked back to his own home and set his clock exactly. How could he do
this without knowing the time his trip took? [H(133)]
● Mr. Field, a speeder, travels on a busy highway having the same rate of traffic flow in
each direction. Except for Mr. Field, the traffic is moving at the legal speed limit. Mr.
Field passes one car for every nine which he meets from the opposite direction. By what
percentage is he exceeding the speed limit? [H(151)]
● If a coin were randomly shaken out of a certain piggy bank, its expected value would be
15 cents. If a dime had been added, the expected value would have been only 14 cents.
What are the contents of the bank? (See the short brute-force sketch just after this list.) [B8(#18)]
● A guidance technician celebrating a successful moon shot tipped his half full brandy
glass slowly to an angle of 45 degrees from the vertical. If the glass was spherical
inside, 3 inches in diameter, with a 2-inch diameter hole in its top, what percent of his
drink did he lose? [B8(#33)]
● In a little known work, the famous geometer of Skalenos proves the following theorem:
“The square of the side opposite the Fandangle is equal to the sum of the squares of the
other two sides added to the product of those two sides multiplied by the square root of
two.” What is a Fandangle? [B8(#43)]
● A wall is made of bricks which are twice as long as they are high. The wall is 13 courses
high, with 100 bricks on the odd courses and 99 bricks plus two half bricks on the even
courses. An ant starts at the lower left corner and walks in a straight line to the upper
right corner. Over how many bricks does he walk? [B8(#34)]
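For readers who would rather let a computer grind than hunt for the elegant insight, here is a minimal Python sketch for the piggy-bank problem above. It is purely illustrative and not taken from the Litton booklets or Dunn's book, and it assumes the bank holds only standard U.S. denominations (1, 5, 10, 25, and 50 cents); it simply searches small banks whose average coin value is 15 cents and drops to 14 cents once a dime is added.

from itertools import combinations_with_replacement

# Assumed standard U.S. denominations, in cents (my assumption; the puzzle does not say).
COINS = [1, 5, 10, 25, 50]

def mean(coins):
    return sum(coins) / len(coins)

# Try every small multiset of coins and test both expected-value conditions.
for n in range(1, 13):
    for bank in combinations_with_replacement(COINS, n):
        if mean(bank) == 15 and mean(bank + (10,)) == 14:
            print(sorted(bank))

Running it prints the single qualifying combination, which readers may prefer to confirm by hand with a little algebra on the two averages.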
Mathematical Bafflers edited by Angela Dunn, Dover Publications; ISBN 0-486-23961-6. Paperback, 217 pages. (The 1980 Dover edition, with corrections, reprints the 1964 McGraw-Hill Book Company edition and includes a foreword by Angela Dunn.) Available on AMAZON for $10.95. I did not stumble across this book until around 1996 – and
the title distracted me because it did not spell out Litton’s “Problematical Recreations” – but when I saw
the woodcut illustrations by Ed Kysar – and some familiar problems, I was nostalgically elated! You can
peek inside the book online at AMAZON – try not to cheat though!
On the back cover of her book, we read (in part) the following:
Mathematical Bafflers gathers the prime problems from 12 years of the esteemed weekly Problematical
Recreations which appeared in Aviation Week [and Space Technology] and Electronic News – periodicals
read by mathematicians, engineers, scientists, computer programmers, and over the years, by serious
puzzlists who heard about the special section. To keep the quality at a peak, Angela Dunn and a team of
mathematicians invented their own puzzles and gleaned the best submissions from an enormous reader
response. Criteria were conceptual originality, ingenuity of approach, elegance of solution, with
preference given to the kind of puzzle more vulnerable to a flash of inspiration than mere persistence.
Categories include algebra, geometry, Diophantine equations, logic and deduction, probability, insight and
number theory.
The woodcut illustrations (by the very gifted Ed Kysar) really add to the charm (as they did in the 11 annual booklets Litton published). Also, the creative titles at the top of each problem greatly enhance the lure of the
problems! The book contains more than 150 problems. Note: James F. Hurley’s book Litton’s
Problematical Recreations does not contain these cute problem titles.
Litton’s Problematical Recreations by James F. Hurley, 1971, Van Nostrand Reinhold Company.
AMAZON has Jim’s book for under $10.00 hardback. Jim breaks his selection of the over 600 problems in
the 12-year series into 8 chapters by topics. I recall stumbling across this book when I was with General
Electric Company Aerospace and Electronic Systems Department in Utica, NY in 1977 browsing the math
books in the town library. I recall my thrill and excitement when I saw these collected into a single book! I
had to get it!!! I loved that Jim gave progressively harder problems by math topic. But the earlier chapters
are as tough or tougher because they require that flash of insight and clever inspiration. Over the years I
lost the book, but in March 2013 Jim sent me an autographed copy. I didn't see Angela Dunn's book until I came to Nashua, NH in 1995. I got Angela's book at Barnes and Noble.
A photo of some of the annual booklets published by Litton Industries follows later. On the back page of
each of these booklets was Litton’s solicitation for engineers, mathematicians, and scientists to contact
Litton for challenging careers. Some of the advertisements presented follow.
“Seeking new solutions to current technological problems is but a part of our activity at Litton. We
continually pose questions concerning the future state-of-the-art and pursue the answers that will be
needed tomorrow.
To do so, we need inquisitive engineers, mathematicians and scientists with the ability to anticipate and
predict. We invite such independent thinkers to consider a career with us.”
Booklet 3 (1963)
“To whom shall the world look for the enlargement of its knowledge? Who shall venture into untrodden
regions, follow up the faint discoveries of earlier times, and resolve a thousand difficulties that baffle
human ingenuity? You must look to the intellectual adventurers who are not afraid to go out of the
common track of thought.” -E. T. Channing (from a speech delivered at Harvard College in 1818)
We look to the engineer, the scientist, the investigative mind to search beneath the surface, to detect new
materials and new methods, to create new concepts.
Our fields of endeavor are: Defense Equipment and Systems, Business Machines, Communications
Systems, Components, Geophysical Research and Instrumentation.
Booklet 4 (1963)
“. . . it is the man, not the method, that solves the problem.” -Heinrich Maschke (on Present Problems of
Algebra and Analysis: Congress of Arts and Science, Vol. I, 1905)
“Our man is the independent engineer who, not content to browse along beaten paths, looks for new
methods to yield more elegant solutions. He is our innovator, our growth, our future.
If you are such a man, your future can grow with Litton. The long-term potential for our products is
responsible for the expansion of our Plants, Laboratories, and Offices in the United States and throughout
the Free World. It is this continual advancement that is creating careers for the original engineer.”
Booklet 5 (1963)
Front Page: The forty-one problems that follow were, for the most part, thoughtfully contributed by our
readers. We pass them along for your mathematical entertainment.
Some require simple reasoning while others might challenge a professional mathematician. In all, the
emphasis has been on conciseness of statement, elegance of solution, and imaginative appeal.
When you have arrived at an answer, check with ours in the back of the booklet. May we hope they
agree.
Back page: New methods of solution, new approaches, [and] new answers are being sought and
encouraged at Litton in every area of our activity. At the prevailing rate of change in technology, “keeping
abreast” is not sufficient. It is vital to anticipate, to foresee, to predict.
We look to inquisitive minds for this vigorous expansion of man’s knowledge. We invite imaginative
engineers, scientists, mathematicians to investigate a career with us.
Our Plants, Laboratories, and Offices in the United States and throughout the Free World continue to
grow, creating new positions in our fields: Electronic Systems; Electronic Components; Business
Machines, Equipment and Supplies; Commercial Electronic Equipment and Services; Nuclear-Powered
Submarines, Surface Vessels.
Booklet 6 (1964)
In our search for new answers, new concepts to advance our technology, we find the uncommon
denominator, the untried direction, the original approach most often leads to innovations. We at Litton are
known for being self-starters with the courage to get off the beaten track.
Engineers, mathematicians and scientists looking for mental elbowroom and longing for the freedom to
forge new trails are invited to investigate a career with us.
We have Plants, Laboratories, and Offices throughout the United States and the Free World. Our fields of
endeavor are wide and varied. Generally: Electronic Systems; Electronic Components; Business
Machines, Equipment and Supplies; Commercial Electronic Equipment and Services; Nuclear-Powered
Submarines, Surface Vessels.
The strange mathematics above can be readily proven. In our efforts to avoid the cumbersome and
maintain conciseness, we combined the first five books of this series into one handy edition, The Best of
Problematical Recreations – Volume 1, now replacing the out-of-print, individual booklets one through
five. Our sixth and seventh booklets are single compilations of 40 and 43 problems, respectively,
representing our continuing booklet series. All three editions (143 problems!) are available upon request
by dropping a card to: Problematical Recreations, Litton Industries, Beverly Hills, California.
Back Page: Proof of the unique solutions being found at Litton is our continual development of new
methods, new materials, new procedures. The advanced equipment that attends is answering the need
for greater accuracy, increased efficiency and more reliability in electronic components and systems.
We consider our recognition of the individual contribution to be the foundation of our accomplishments.
We therefore encourage original and inventive engineers, mathematicians, and scientists to join us.
Booklet 8 (1966)
Back Page: We hope you have enjoyed this set of mathematical challenges designed to delight your
inventive faculties. We at Litton delight in the application of mathematical and scientific disciplines to the
practical problems facing us in our fast-moving, technological fields of endeavor.
If you are an engineer, mathematician, or scientist with the ability to translate your skills into advanced
electronic components and systems, you are invited to apply for a Litton career.
Booklet 9 (1967)
[1st page] – Our series is raised to the ninth power with this new edition of mathematical challenges
largely contributed by our readers. We thank them for their consistently novel offerings and welcome all
readers to follow suit and extend, what we hope will be, an infinite sequence.
[back page advertisement] –
Our Plants, Laboratories, and Facilities throughout the United States and the Free World are continually
expanding and creating new positions. We at Litton consider good problem-solvers to be the foundation of
our technological accomplishments. If you are an engineer, mathematician or scientist and would like to
apply your ingenuity to advanced electronic components and systems, we suggest you consider a Litton
career.
Booklet 11 (1970)
If you’ve found this collection of rigorous mental gymnastics stimulating and entertaining, we suggest you
take your talents to LIEPS (Litton Industries Extended Placement System). Your qualifications can then
be known to all our Litton divisions in the United States and throughout the Free World. You can be
placed in the area of our endeavors best suited to your capabilities: Business Systems and Equipment;
Professional Services and Equipment; Industrial Systems and Equipment; Defense and Marine Systems.
Engineers, mathematicians, and scientists are invited to address an inquiry to: LIEPS (Litton Industries
Extended Placement System), 360 North Crescent Drive, Beverly Hills, California 90213
Ken Shea
"Technique is the boundary of democracy. What technique wins, democracy loses." -Jacques Ellul
As few as three or four millennia ago, mankind, the uncontrollable and occasionally tumultuous
forces of nature, and society were largely considered one. The implications of this undifferentiated apprehension were profound, but they mostly took the form of mythopoeia and a belief that mankind's fortunes waxed and waned at the mercy of awesome, uncontrollable forces.
Propitiations to the gods could be made, of course, but the idea that the course of society could
be significantly and directly altered by mankind in some fashion was considered dangerously
exotic or outright incomprehensible to most people. Before Athenian democracy and the Greek
philosophers, particularly the fifth-century BCE Sophists (e.g., Antiphon) and Plato (b. 428
BCE), nature and politics were essentially inextricable; ontologically, politics as such arguably
didn't exist yet as an independent mode of thought or sophisticated field of inquiry. Eventually
both political philosophy and politics became differentiated from the earlier skein of mankind,
nature, and society, and reciprocal exchanges became possible between political philosophy and
politics. Reifying these concepts, Plato hoped the artistry of the statesman would overwhelm the
insincere promises of the mere politician. Considering Plato’s foundational influence, it wouldn't
be much of an exaggeration to apply British philosopher Alfred North Whitehead's famous
comments on European philosophy in general ("a series of footnotes to Plato") to political
philosophy in particular, such was Plato's reverberating impact and ingenuity in conceptualizing
politics as an entire system of interlocking roles, procedures, institutions, and assumptions (cf.
Plato's Republic, Plato’s Laws). In fact, the Roman statesman Cicero includes the following
rhetorical question in his first-century BCE Socratic dialogue, De re publica: "What better
authority can we cite than Plato?”
But what are political philosophy and politics all about? Political philosophy seeks to clarify the
background assumptions upon which politics is predicated. Therefore, political philosophy is
concerned with the following kinds of ostensibly abstract issues: rights, justice, liberty, duties,
law, and political legitimacy. The distinction between political philosophy and politics is
analogous to that between philosophy and science more generally insofar as philosophers will
inspect the background assumptions of science (e.g., naïve realism) while science - viz.,
political science or empirical science proper - will be concerned more narrowly with the facts on
the ground and the rules of engagement with those facts. What exactly are these political facts
on the ground and how do they define, and get defined by, the issues political philosophers
perennially grapple with? Politics, at bottom, may be a way of exchanging power for the promise
of security to better navigate constants in human affairs, such as change, resource scarcity, and
the fact that groups in society seek competitive advantage within and between themselves.
James Madison would much later say that, "if men were angels, no government would be
necessary"; this is only half-true because of resource scarcity, partial rights, and concentration
of wealth. The role of private power, on the whole, has been criminally underappreciated qua a
facilitator or check on freedom, which has at least two aspects, viz., freedom from government
coercion, i.e., the right-wing libertarian conceptualization of liberty, and, second, the actual capacity to act on one's own choices, i.e., something closer to positive liberty.
Put another way, politics is fundamentally about power, how it is attained, justified, wielded, and
to whom the fruits of that exercise of power flow. The word politics can be historically traced to
Aristotle's Politics, an eight-part text, which some historians consider to be a companion piece to
Nicomachean Ethics, such was Aristotle's earnest striving towards virtuous, legitimate, general
welfare-promoting rule; please excuse Aristotle's qualified defenses of slavery, monarchy, and
aristocracy. Etymologically, politics itself means something like the "affairs of the cities"; polis
means city in Greek. The conflation of politics and ethics - an understandably head-scratching
combination to modern minds - came naturally for the ancient Greeks since they made fewer
distinctions between personal and social modes, which could help explain why Aristotle's
Politics and Nicomachean Ethics would be linked. Aristotle essentially concludes in Book IV of
Politics that a polity, or constitutional form of government, is potentially ideal because, all things
being equal, such would consistently tend to promote widespread personal fulfillment and
champion the common interest by harmonizing class concerns; Aristotle considered democracy
a "defective" and "perverted" politics compared to polity because the former, democracy, was
prone to eventual concentration of power, demagoguery (agogos means "leading" in Greek),
and mob rule. The mob can be more charitably called the demos, which historically meant the
common people of ancient Greece; today, the term demos is used to refer to the electorate in a
democracy. Etymologically, the word democracy itself means power (Greek kratia for -cracy,
meaning power or rule) of the people (Greek demos, meaning the people).
The phrase de re publica is Latin and treated synonymously with res publica, which might
literally translate to a "public matter." Today, res publica is used coterminously with the word
commonwealth. Already, the reader will appreciate how far the fifth-century BCE Sophists and
Plato have shifted the landscape and, in critical respects, created the landscape. In the dialogue
Laws, Plato, who rarely missed an opportunity to blend mathematics, philosophy, and inchoate
civics, homed in on the number 5,040 for the ideal number of citizens composing a polis partly
because 5,040 is a superior highly composite number boasting a staggering 60 divisors (5,040
is the sum of 42 consecutive prime numbers, seven factorial, and the product of 10 times 9
times 8 times 7). Plato’s rationale seemed to be that a highly composite number would be
advantageous for divvying up positions and roles in a society. The trouble would come as the de
facto interregnum of the Hellenistic Age - historians trace this period from the death of
Macedonian king Alexander the Great in 323 BCE to the death of the Roman Republic and birth
of the Roman Empire right after the Battle of Actium in 31 BCE - marked a transition from the
sheltered reality of the city (polis) to the more unbounded, expansionist nature of empire. The
Seleucid Empire, founded by and ruled dynastically starting with Seleucus I Nicator, existed
throughout much of the Hellenistic Age and accumulated significant chunks of Alexander the
Great’s once-grand Macedonian Empire post-323 BCE before succumbing to Roman general
Pompey the Great, a pivotal figure in the Roman Republic’s transition to Roman Empire.
Meanwhile, the Cynics, Stoics, and Epicureans sought to enlarge the depth of field from the
traditional obligations of citizenship and questioned the basic values inhering within the polis.
Sheldon Wolin lyrically writes in Politics and Vision that, “The strong elements of despair and
withdrawal that colored Cynicism and Epicureanism were nourished by an anti-political impulse
which could not be concealed by their temporizing and grudging acknowledgment of some utility
in a political order.” Epicurus went as far as to suggest, “we must free ourselves from the prison of everyday affairs and politics.”
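(An aside for readers who like to verify such numerology: the divisor count and the factorial and product identities claimed for 5,040 above do check out, and a few lines of Python, offered purely as an illustration rather than as part of the essay's sources, confirm them.)

from math import factorial

n = 5040
# Count the divisors of 5,040 by trial division.
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(len(divisors))        # 60
print(factorial(7) == n)    # 5,040 is seven factorial
print(10 * 9 * 8 * 7 == n)  # and also the product 10 x 9 x 8 x 7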
With sprawling areas increasingly at stake, one way to bind people together is through myth
tethered to hierarchy. As the attainment and wielding of power usually requires or, at least,
benefits from some kind of myth or attempt at political legitimation in an ordered society,
emperors and monarchs throughout history have predictably played the divinity card. The
Mandate of Heaven that putatively informed various Chinese dynasties and the Imperial cult of
ancient Rome were early attempts at shoring up political legitimacy under celestial pretexts.
One popular, and seemingly incontestable, way of purporting to have political legitimacy through
the ages has been to say that some sort of celestial being has sanctioned one extraordinary
individual, or special group, having power and implicitly nodded at the subjugation of a less
fortunate group composed of lesser beings - the divine right of kings comes to mind. Church
and state, for instance, significantly blurred when French and English monarchs provided the
royal (a.k.a., King’s) touch in the sacramental ritual of laying on of hands to supposedly cure
scrofula (i.e., mycobacterial cervical lymphadenitis, an abnormality of the lymph nodes
associated with tuberculosis), or the King’s evil, as scrofula was apparently misapprehended at
the time. Shakespeare's Macbeth even references Edward the Confessor's legendary touch ("A
most miraculous work in this good king," gushes Malcolm in Act IV, Scene III). Monarchs would
hope to cement dynasties or merely heighten the legitimacy of their reign by miraculously curing
diseases, which often naturally went into remission. Clearly, this entire situation left a lot to be
desired in terms of scientific acumen, but the religious practice of laying on of hands did
outwardly broaden political legitimacy for the monarch across social classes. If nothing else, the
religious practice of laying on of hands highlights the centuries-long function that the Roman
Catholic Church served as a quasi-state granting, bolstering, or withdrawing claims of political
legitimacy to European monarchs in the Middle Ages and Early Modern Period.
Over time, both scientific understanding and colonial ambitions grew. The European colonizers,
thus, seized on a self-serving explanation for colonialism that they felt sufficiently justified
exploiting poorer peoples in Asia and Africa: exposure to European culture does a world of good
for an occupied country's development, backwards manners, and spiritual health. Or so the
stated rationale went. Domestically, a different story was being told by the Enlightenment-era
political philosopher John Locke, who would impress Thomas Jefferson with a concept known
as the consent of the governed. The social contract theorist John Locke maintained that
governments gained moral and political legitimacy through shoring up consent from those
governed, which consequently justified the use of state power. Thomas Jefferson would go one
step further than invoking the consent of the governed in the preamble to the U.S. Declaration of
Independence: Thomas Jefferson grandly asserted that “unalienable” rights and “self-evident”
truths were “endowed by their creator.” The second sentence of the preamble, confusingly, grounds those rights in a creator even as it derives governments’ “just powers from the consent of the governed.”
Bearing these later developments in mind, Founding Father and Federalist Alexander Hamilton
was idealistic when he declared, "The fabric of American empire ought to rest on the solid basis
of the consent of the people. The streams of national power ought to flow immediately from the
pure original fountain of all legitimate authority," at a time when the U.S. Electoral College
tempered the passions of the hoi polloi and state legislatures elected members to the U.S.
Senate, which was once actually known as the “cooling saucer.” The latter U.S. Senate election
process would cause much chaos, bribery, and violence throughout the 19th century until the
17th Amendment was ratified and direct election of senators came to pass in 1913. Further, this
concept of the consent of the governed remains difficult to square with the practice of
gerrymandering, multiplying federal agencies peopled with unelected bureaucrats, and the fact
that six of the last seven presidential elections in the United States have seen the Democratic
candidate win the popular vote over the Republican candidate. (Only two major parties are
typically tolerated in the United States, though Ross Perot garnered 18.9 percent of the popular
vote in the 1992 U.S. presidential election. The exasperation from both major parties reached a
fever pitch when “spoiler” Ralph Nader had the audacity to run in the closely contested 2000
U.S. presidential election.) Structurally, the U.S. Electoral College itself obviously has roots in
slavery and shrewd political calculation - Thomas Jefferson, in fact, is known by historians as
The Negro President (e.g., by historian Garry Wills), and Federalists at the time bristled at the
so-called slave power of three-fifths representation in the Electoral College that handed
Thomas Jefferson the 1800 U.S. presidential election. Thomas Jefferson, the third president of
the United States and purported Author of America, is an enormously complicated figure. On the
one hand, Thomas Jefferson may have owned 600 slaves and, on the other, he harbored
genuinely egalitarian tendencies, though perhaps not as wholeheartedly as Thomas Paine (cf. Rights of Man), and wanted a Constitutional Convention every generation to ensure freshness of ideas.
To put this all in perspective, the U.S. Constitution came into force in 1789, when about six
percent of the population had the right to vote. Property-owning white males were waved
through and permitted to vote, but Native Americans, blacks, and women were turned back, if
they weren't killed or enslaved outright. The "unalienable" rights and "self-evident" truths
"endowed" by the Almighty apparently did not initially encompass the latter three groups. White
women were permitted to vote 100 years ago in 1920 with the ratification of the 19th
Amendment, 131 years after the U.S. Constitution came into force. The “consent of the people”
was soaring rhetoric when 94 percent of the population was effectively disenfranchised. Native
Americans had to wait longer than the 14th Amendment and Indian Citizenship Act of 1924 to
receive full citizenship and suffrage in a way that mirrored the trajectories of blacks and women
achieving the franchise - gradually and with significant setbacks across individual states.
These issues collectively revolve around how far the fabric of America extends. Alexander
Hamilton, for example, emphasized implied powers when he mentioned American empire, as
opposed to a constitutional republic outlining a separation of powers, in The Federalist Papers.
After all, who can say that providing for the common defense and promoting the general welfare
for the American people doesn’t necessitate taking proactive steps outside the explicit language
of The Constitution, assuming an implied powers interpretation of The Constitution? In The
Federalist No. 22, Alexander Hamilton refers to the fledgling constitutional republic (i.e., a form
of indirect democracy whereby representatives are theoretically elected) of the United States as
an empire, thereby anticipating the imperialist aggression of the Spanish-American War,
Philippine-American War, and annexation of Guam, Puerto Rico, Hawaii, and the Philippines by
the United States that would come decades later at the turn of the 20th century. Many historians
of vastly different political bents, such as Gore Vidal and Niall Ferguson, consider these spats
between the United States and Spain the beginnings of U.S. empire. The fin de siècle debates
over whether the United States should be inward-looking or imperialistic are exhaustively
chronicled in Stephen Kinzer's recent book The True Flag: Theodore Roosevelt, Mark Twain,
and the Birth of American Empire. U.S. imperialism got underway in earnest as Theodore
Roosevelt proclaimed that “flagrant” and “chronic” wrongdoing by a weaker Latin American
nation legitimated invasion; the United States would, of course, unilaterally decide all of this.
This ad hoc process was called Roosevelt’s corollary to the Monroe Doctrine. Stephen Kinzer,
on the whole, concluded that Americans have coexisting instincts of imperialism and
isolationism. Interestingly, the United States didn’t have to choose between the two when it
expanded its borders considerably without seafaring via, chronologically: Thomas Jefferson
inking the Louisiana Purchase in 1803 and acquiring nearly a million square miles from France
(Napoleon Bonaparte needed funds to finance the post-revolutionary conflicts), the Adams-Onís
Treaty (a.k.a., Florida Purchase Treaty) of 1819 that shored up land from Spain, and bringing
the Mexican-American War to a close with the Treaty of Guadalupe Hidalgo in 1848 and
securing land from Mexico. A spirit of manifest destiny truly pervaded 19th-century America.
After the impassioned debates about America’s part to play on the world’s stage had been
sufficiently aired, the quixotic Wilsonian quest to "make the world safe for democracy" with the
Committee on Public Information (cf. Edward Bernays’s Propaganda) was echoed by the
Truman Doctrine, fueled by the National Security Act of 1947, and the belligerent Kennedy Doctrine
roughly one, two, and three generations later. One will scan the U.S. Constitution in vain in
search of justification for such globe-trotting missions, which necessitate the leadership of the
self-deputized United States maniacally scouring the entire planet for perceived deviant
behavior; the threat of communism proved useful to rally the troops for a time. Decades later,
the Reagan Doctrine and Bush Doctrine simply dispensed with a few niceties in deterring
democracy abroad (e.g., Nicaragua) and rolling back civil liberties at home (e.g., USA PATRIOT
Act), purportedly to fight the global war on terror. The concept of democracy, in principle, had
gone from a way of empowering the demos (remember that democracy derives from demos and
kratia in Greek) to dubiously legitimizing the internationally unpopular imperialist aggression of
the United States in the name of supposedly spreading democracy and, curiously, so-called free
market capitalism, which is said to be almost inexorably paired with democracy by the World
Economic Forum and International Monetary Fund (cf. Washington Consensus). In the
immediate wake of the Soviet Union’s dissolution, political economist Francis Fukuyama served up his famous “end of history” thesis, heralding liberal democracy paired with market capitalism as the final form of human government.
A country can have an illiberal democracy or no democracy to speak of and still have capitalism,
or a country can feature democratic institutions with nationalization of public assets, high
welfare spending, and stringently regulated trading. In spite of this apparent variance, the
evidence increasingly shows that actual democracy is incompatible with unregulated capitalism,
grotesque levels of inequality, and concentrated wealth (cf. Joseph Stiglitz’s The Price of
Inequality: How Today’s Divided Society Endangers Our Future). The bottom seventy percent
socioeconomically in the United States have scant impact on federal policy in the sense that
their wishes are not reflected in legislation; the opposite is increasingly the case as a citizen
moves from the 70th percentile socioeconomically to the 99th percentile socioeconomically.
This situation does not describe an actual democracy. Corporate capture, the manufacture of
consent, and managed democracy have rendered federal elections in the United States all but
theater (cf. Princeton’s “Testing Theories of American Politics: Elites, Interest Groups, and
Average Citizens”). Voter turnout in the United States (turnout percentage = 100 × ballots cast by the voting-age population (VAP) / total VAP) hasn't exceeded two-thirds of the electorate at any point in the last century in terms
of these dramatized quadrennial U.S. presidential elections. That's exactly the kind of yawning
apathy one would expect in such an illiberal system awash in corporate investments and pliant
career politicians. The United States would be a wonderful place to start spreading actual
democracy but, instead, the neoconservatives and neoliberals - two terms almost tailor-made to
rouse George Orwell, and perhaps even Franz Kafka, for their reversals of meaning - in the
United States have conspired to preclude actual democracy from breaking out internationally or
domestically. There might be no easy way out electorally. Radical reforms of some kind will
have to occur to sidestep the descent into corporate tyranny. Consumer advocate and civil
libertarian Ralph Nader has a series of sensible proposals in The Seventeen Solutions: Bold
Ideas for Our American Future designed to serve as a roadmap to enliven Americans’ sense of
citizenship moving forward. There’s a window for positive change, but it is quickly closing.
“Societies grow into systems. The systems require management and are therefore increasingly wielded,
like a tool or a weapon, by those who have power. The rest of the population is still needed to do specific
things. But the citizens are not needed to contribute to the form or direction of the society. The more
“advanced” the civilization, the more irrelevant the citizen becomes.” -John Ralston Saul
"What is at stake in democratic politics is whether ordinary men and women can recognize that their
concerns are best protected and cultivated under a regime whose actions are governed by principles of
commonality, equality, and fairness, a regime in which taking part in politics becomes a way of staking out
and sharing in a common life and its forms of self-fulfillment." -Sheldon Wolin
The unclassifiable ruminations of the author of the present work, Stains Upon the Silence —
something for no one, blur the lines between philosophy, cosmology, poetry and humor.
Perhaps they are conceptual free verse, surreal, pithy, sometimes sardonic. May wanders in a
hyperdimensional Garden of Forking Paths throughout a hologrammic Library of Babel. Each
letter of his writings, and certainly the spaces between the letters, themselves, and all possible
combinatoric arrangements of these, are clearly isomorphic to each point in the Cosmic
hologram; linked by reverse causality to all information which exists on the future event horizon,
in this series of divagations, a conceptual Drunkard’s Walk.
If May is an atheist, then he writes only for God. Aspiring to become a popular writer, he writes
for beings which do not, and never will, exist. He recognizes that reality, a Rorschach inkblot
interpreted as if it were a geometric theorem and a geometric theorem interpreted as if it were a
Rorschach inkblot, has made parody obsolete. The libraries of the ‘future’ in each of Hugh
Everett’s Many-Worlds will be strewn with uncountable numbers of sublime corpses of amortal
beings, who spent their endless lives attempting to determine the most optimal order of the 54
factorial arrangements of the subsections of this work, in order to extract each particle of
meaning. This is certainly infinite time well spent.
“I know of an uncouth region whose librarians repudiate the vain and superstitious custom of
finding a meaning in books and equate it with that of finding a meaning in dreams or in the
chaotic lines of one’s palm...the books signify nothing in themselves. This dictum, we shall see,
is not entirely fallacious.” — Jorge Luis Borges, “The Library of Babel,” page 3
If, contra J. W. von Goethe, everything that does not exist is a symbol of the eternal, then this is
certainly true of the wisdom of the present work.
Anonymous
I’ve done some writing myself, and have found that one of the hardest things to do is sustain
humor. Good humor is more than stringing together jokes; there has to be pacing, a rise and
fall, which you do very well. I’m impressed how you find so many twists and paradoxes and
absurdities where I wouldn’t even have looked for them.
I actually think it’s more like poetry than an attempt at humor, though. I think that finding those
twists and paradoxes is how you deal with unknowable things, and make them less daunting or
even frightening. Each one encapsulates where reason isn’t good enough to lead to absolute
truth. Sometimes. Other times you are just looking for laughs. It’s good, though. High-brow
without being stodgy or tedious. They’re good laughs, too, or at least good tries at good laughs,
with a good success rate.
Someone asked me what Stains Upon the Silence: Something for No One was about. If I knew
what it was about, I wouldn’t have written it. An editor once suggested that these writings are an
admixture of what Tibetan Buddhism calls “crazy wisdom” with sane folly.
“Oh, you can’t help that,” said the Cat: “we’re all mad here. I’m mad. You’re mad.”
“You must be,” said the Cat, “or you wouldn’t have come here.”
Maybe the act of trying to understand what I have written changes its meaning, as the act of
making an observation or measurement at the quantum scale changes the very phenomenon
being observed. G. I. Gurdjieff maintained that all knowledge was material. Presumably then, if
he is correct, knowledge and information would be subject to the conservation laws of physics.
There is in fact a principle of conservation of quantum information as a consequence of two
fundamental theorems of quantum mechanics. See, e.g., “Experimental Test of the Quantum
No-Hiding Theorem,” Jharana Rani Samal, et al., Physical Review Letters.
If information in the universe is conserved, i.e., can neither be created, nor destroyed, neither
added to nor deleted, then what are the implications of this for the acquisition of knowledge by
individuals and perhaps even of wisdom, however defined, or for the persistence of memories?
Forethought and Afterword for Stains Upon the Silence by Richard May
Adam Kisby
Today, I met with a self-described alien. I traveled deep inside a mountain to find him. We
discussed a variety of topics including the trauma of pre-existence, the Matrioshka nature of the
subtler spiritual bodies, the precise duration of the timeless Bardo state, birth as death, death as
birth, alpha as preceding beta, beta waves as preceding alpha waves, physical immortality as a
practical means by which to attain enlightenment, and enlightenment as a practical means by
which to attain post-existence. We discussed synchronicity and mere coincidence, noting that
perceptions of the same are largely a matter of attention, but that, generally speaking, events
must exist in order to be attended to. We agreed that certain license plates have a lot to say, but
that they're limited to using such words as "nomads" when perhaps "peripatetics" would be
better, due to their having far fewer characters than a standard Twitter post. He engages in
terrestrial spiritual practices but has abandoned sitting as pedestrian, preferring instead
standing or even walking. His writings, consisting mostly of observations on the human
condition in the form of Monarch notes on haikus, are less obscure, in fact, than the magna opi
of some Earth prodigies. He spoke of commissioning me to compile his writings, half-despairing,
like the Buddha, that so few would understand. There was no talk of Tibetan music or
strawberry ice cream, but his eyes grew wide at his own mention of Roquefort cheese. He
offered me Bing cherries or walnuts, but he was careful not to offer me any Roquefort. It's
unclear to me if this was due to his own deliberate attachment to it or his consideration of my
difficulties in overcoming a pernicious casomorphin addiction.
Adam Kisby
Directions:
The relationship between intelligence and belief is complex. This survey is designed to discover
the extent to which members of the high-IQ community believe “weird things” (in the sense of
Shermer).
First, list any high-IQ societies of which you have ever been a member (high-IQ society
membership is not required to complete the survey).
Second, rate each of the 100 belief statements on a scale of 1 to 7, according to the following
table:
1 = Strongly Disagree
2 = Moderately Disagree
3 = Mildly Disagree
4 = Ambivalent
5 = Mildly Agree
6 = Moderately Agree
7 = Strongly Agree
The belief statements are phrased as precisely as possible, so be sure to answer them exactly
as they are written.
For any belief statement that contains an unfamiliar reference, take a moment to look up the
reference online before choosing your response.
Feel free to include any explanations or other comments that you wish to share.
Third, send your complete set of responses (list of high-IQ society memberships, belief
statement ratings, and any explanations or other comments) to [email protected].
___ 2. Former President Jimmy Carter actually saw the planet Venus.
___ 6. Doug Bower and Dave Chorley have stated publicly that crop circles attributed to
extraterrestrial biological entities were really made by them using a plank of wood and some
rope.
___ 7. A species of primate unknown to mainstream science lives in the forests of the Pacific
Northwest.
___ 8. The son of Ray Wallace has stated publicly that footprints attributed to Bigfoot were
really made by his father using a pair of carved wooden feet.
___ 9. A thorough search of Loch Ness using hundreds of sonar beams and satellite tracking
failed to prove the existence of the Loch Ness monster.
___ 10. A plesiosaur (or similar organism, unknown to mainstream science) really lives in Loch
Ness in Scotland.
___ 11. A thorough search of Carl Sagan’s garage failed to prove the existence of a
fire-breathing dragon.
___ 12. A sauropod (or similar organism, unknown to mainstream science) really lives in the
vicinity of the Congo River in Africa.
___ 13. There is not a Flying Spaghetti Monster in the sky that grants some of the people some
of their wishes some of the time.
___ 14. A buoyant betentacled blob (or similar organism, unknown to mainstream science)
actually feeds on the psychic energy of human beings as it floats near the ceilings of crowded
movie theaters.
___ 15. Human behavior is measurably influenced by the relative positions of celestial bodies.
___ 16. The gravitational force exerted by the planet Mars on an infant at the moment of its birth
is theoretically less than the gravitational force exerted on that infant by its mother at the same
moment.
___ 18. The relative position of the planet Mars at the moment of the birth of an individual is
significantly correlated with the eventual athletic eminence of that individual.
___ 19. The Earth and other planets revolve around the Sun.
___ 20. The Sun and the planets revolve around the Earth.
___ 21. Solutions with solutes at concentrations of less than one part in 10⁶⁰ can measurably
improve human health.
___ 22. The placebo effect explains why many ineffective treatments seem to be effective.
___ 23. Cold fusion of the sort reported by Fleischmann and Pons is not real.
___ 24. Two nuclei have fused into a single nucleus at temperatures below one hundred
degrees Fahrenheit.
___ 25. Human beings have been transported from one location to another location ten miles
away in less than one second.
___ 26. That quantum teleportation has been effective over a distance of ten miles does not
imply that it is possible to teleport human beings over such distances.
___ 27. That positrons are said to travel backward in time does not imply that human beings can
travel into the past.
___ 28. Human beings can travel into the future whether or not time machines already have
been invented.
___ 29. So-called remote viewing is not a reliable means by which to discover information about
distant places.
___ 30. There are people who can reliably discover information about distant places using
psychic abilities.
___ 31. So-called precognition is not an accurate means by which to predict the future.
___ 32. There are people who can accurately predict the future using psychic abilities.
___ 33. There are people who can move physical objects by thinking about them.
___ 35. There are people who can know the unexpressed thoughts of others.
___ 36. Verbal and nonverbal communication account for all instances of apparent telepathy.
___ 37. There are visual and auditory hallucinations that result from consuming or neglecting to
consume psychotropic substances.
___ 38. There are people who can see the spirits of deceased human beings.
___ 39. There are people who can talk to the dead.
___ 40. Cold reading techniques are used to manipulate grieving widows.
___ 41. Subliminal messaging is not a reliable means by which to influence the behavior of
others.
___ 42. There are people who can influence the behavior of others using psychic abilities.
___ 46. The Universe is between ten and twenty billion years old.
___ 47. Bertrand Russell’s teapot once passed somewhere between Earth and Mars.
___ 48. The God of Abraham, Isaac, and Jacob never existed in the historical sense.
___ 49. There is a man in the sky who grants some of the people some of their wishes some of
the time.
___ 50. The God of Abraham, Isaac, and Jacob does not answer prayer.
___ 51. Jesus Christ lived, died, and rose from the dead.
___ 53. True memories require physical brains of some kind for their encoding, storage, and
retrieval.
___ 54. There are people who have memories of past lives.
___ 56. The consciousness of an individual ends with the death of the physical body of that
individual.
___ 57. The Face on the planet Mars is a monument of an ancient Martian civilization.
___ 58. The Face on the planet Mars appears due to optical phenomena.
___ 59. High technology did not exist before the modern era.
___ 60. Nuclear reactors have been discovered on Earth that pre-date modern civilization.
___ 61. According to the International Astronomical Union’s definition of the term planet, there
are currently eight planets.
___ 62. According to the International Astronomical Union’s definition of the term planet, there
are currently twelve planets.
___ 63. The Anunnaki mentioned in ancient Sumerian texts continue to intervene in the affairs
of human beings.
___ 64. The Twelfth Planet in the sense made famous by Zecharia Sitchin never existed in the
historical sense.
___ 65. Chemtrails are sprayed at high altitudes from officially unacknowledged tanker aircraft
for the purpose of climate modification.
___ 66. Chemtrails are not sprayed at high altitudes from officially unacknowledged tanker
aircraft for the purpose of mind control.
___ 67. The Twin Towers fell at rates consistent with accepted principles of physics on
September 11th, 2001.
___ 68. The destruction of the Twin Towers on September 11th, 2001 was the result of a
conspiracy.
___ 69. The destruction of the Twin Towers on September 11th, 2001 was perpetrated by
members of the militant Islamist organization known as al-Qaeda.
___ 70. The destruction of the Twin Towers on September 11th, 2001 was part of a false flag
operation perpetrated by agents of the government of the United States.
___ 72. I have had an experience that could be plainly described as an encounter with an
extraterrestrial biological entity.
___ 73. I have not had an experience that could be plainly described as an encounter with
Bigfoot, the Loch Ness monster, or some other legendary cryptid.
___ 74. Given the number of new species that have been discovered in recent years, the only
logical conclusion is that Bigfoot, the Loch Ness monster, or some other legendary cryptid
exists.
___ 75. Considering how many scientific discoveries have been opposed by mainstream
science, it is unreasonable to assume that cold fusion of the sort reported by Fleischmann and
Pons is not real.
___ 76. I have direct knowledge that cold fusion of the sort reported by Fleischmann and Pons
is real.
___ 77. I have had an experience that could be plainly described as an instance of telepathy,
telekinesis, or some other psychic phenomenon.
___ 78. Given how little human consciousness has been studied by mainstream science, the
only logical conclusion is that telepathy, telekinesis, or some other psychic phenomenon is real.
___ 79. Considering the number of natural phenomena that appear to be intelligently designed,
it is unreasonable to assume that the God of Abraham, Isaac, and Jacob does not exist.
___ 80. I have not had an experience that could be plainly described as an encounter with the
God of Abraham, Isaac, and Jacob.
___ 81. I do not have direct knowledge that the destruction of the Twin Towers on September
11th, 2001 was part of a false flag operation.
___ 82. Considering the number of false flag operations that have been proposed by the CIA, it
is unreasonable to assume that the destruction of the Twin Towers on September 11th, 2001
was not part of a false flag operation.
___ 83. The idea that extraterrestrial biological entities are interacting with human beings is
more frightening than fascinating.
___ 84. The idea that legendary cryptids are sighted by human beings is more fascinating than
frightening.
___ 86. The idea that there are people who have psychic abilities is more fascinating than
frightening.
___ 87. The idea that the God of Abraham, Isaac, and Jacob exists is more frightening than
fascinating.
___ 88. The idea that there are conspiracies behind events such as the destruction of the Twin
Towers on September 11th, 2001 is more frightening than fascinating.
___ 89. It is better to have a small amount of information about which one is certain than to
have a large amount of information about which one is uncertain.
___ 90. It is better to believe that an idea is true that later turns out to be false than to believe
that an idea is false that later turns out to be true.
___ 91. Data should be rejected when they contradict scientific theories.
___ 92. Scientific theories should be modified when they contradict data.
___ 95. A potentially infinite number of explanations may be proposed that are consistent with
any given set of data.
___ 99. Man's mind once stretched by a new idea never regains its original dimensions.
___ 100. Keep an open mind, but not so open that your brains fall out.
Ken Shea
Arthur Schopenhauer was a German 19th-century philosopher who was inspired by a chorus of
different voices: Plato, Immanuel Kant, David Hume, George Berkeley, Buddhism, the
Bhagavad Gita, and the Upanishads of the Hindu Vedas. In a Positivist age, Schopenhauer
propounded an unpopular type of metaphysical voluntarism which maintained Will informed the
core of reality and intellect represented a mere secondary phenomenon. The epistemological,
ontological, aesthetic, and ethical ramifications of this seemingly disempowering, and essentially
godless, double-aspect theory are tenderly unpacked in Schopenhauer's masterpiece, The
World as Will and Representation. Therein, Arthur Schopenhauer essentially flavors Immanuel
Kant's epistemological idealism with a fresh, some would say disturbing, form of ontological
realism by claiming Will stands outside space, time, plurality, and the principle of individuation
“as thing in itself and therefore imperishable.” In this seminal respect, Schopenhauer's double-aspect theory echoes the Hindu Upanishads; n.b., the dueling monist and dualist schools' treatments of Brahman and Atman in Hinduism.
Some, retrospectively, consider Arthur Schopenhauer the first Western philosopher to seriously
incorporate Eastern mystical-philosophical conceptions into a fleshed-out philosophical system
encompassing manifold dimensions of metaphysics. Because Schopenhauer propounded that Will was a blind, striving force (a fundamentally irrational "dark, dull driving") with no aim or ultimate satisfaction in view, he thought the ideal strategy for lessening suffering was lessening involvement with Will through worldly resignation à la Christian Quietism, compassion, aesthetic contemplation, and sexual abstinence. Christian Quietism might be defined here as devotional contemplation and the renunciation of the will; more colloquially, quietism is the
composed acceptance of that which one cannot change. Bertrand Russell, in A History of
Western Philosophy, distilled the essence of Schopenhauer's ethical advice for living relatively
peaceably in the whirlwind of Will and worldly strife by saying: "The cause of suffering is
intensity of will; the less we exercise will, the less we shall suffer" (Russell, pg. 756).
The metaphysical affinities between Schopenhauer's Will and the three marks of existence in
Buddhism (viz., anicca, dukkha, and anatta, or impermanence, suffering, and non-self, respectively) are certainly compelling, though Schopenhauer seems irretrievably pessimistic about whether something like Buddhism's Noble Eightfold Path could light the way to permanent liberation, curtail rebirth in samsara, or transport devotees to rapturous divine union. In the essay “On the Vanity of Existence,” Schopenhauer makes a bleak ontological argument and asserts that “the
vanity finds expression in the whole way in which things exist.” Schopenhauer might, therefore,
look askance at the soteriology, as opposed to the handy myth, of Nirvana in Buddhism, the
so-called summum bonum of the Noble Eightfold Path - etymologically, Nirvana means “be
extinguished” or “to blow out” in Sanskrit and has parallels to moksha in Hinduism - and treat
permanent liberation as deus ex machina. Because Schopenhauer saw Will as a pernicious yet
central force of this world (“thing in itself and therefore imperishable,” after all), permanent
liberation as peddled by spiritualists would be precluded a priori unless one somehow drastically
altered the fundamentals of reality. Put another way, in Schopenhauer’s metaphysics a Buddhist monk still feels occasional hunger pangs no matter the monk’s belief system or relationship to sunyata, because that metaphysics’s ontological architecture is built around Will.
David Benatar is a contemporary South African moral philosopher who argues that there is a moral obligation not to procreate because of an asymmetry between pleasure and pain: roughly, the absence of pain is good even if no one exists to enjoy that good, whereas the absence of pleasure is not bad unless someone exists to be deprived of it. There is simply too much suffering in the world (e.g., more than 20,000 people die from
hunger or malnutrition every day), and “a charmed life is so rare that for every one such life
there are millions of wretched lives” (Benatar, pg. 92), to morally justify procreation. Moreover,
David Benatar makes the case that self-assessments of well-being are significantly skewed
towards the positive end of the spectrum and basically unreliable. The so-called Pollyanna
Principle, for instance, is a staple of human psychology that causes humans, presumably for
evolutionary reasons, to bias their judgements towards a positive self-assessment for past,
current, and future states. Research shows that people tend to recount many more positive than
negative events, subscribe to being “pretty” or “very” happy in the current moment, and
anticipate glad tidings in the years ahead no matter what the actual situation. There is also a
psychological tendency towards adaptation whereby a person will quickly weather a bad
situation in the present and alter their expectations in the future by, in effect, establishing a new
baseline and papering over the past via the Pollyanna Principle (cf. hedonic treadmill). The
implications for a potential parent’s assessment of the future child’s presumed well-being are
immense, insofar as those assumptions are predicated on established psychological biases.
One such psychological bias is comparison: bolstering one’s self-assessment by subjectively
comparing one’s life to the lives of those around one, as opposed to providing an objective
assessment of one’s own life on its unique merits. The upshot of this psychological bias of
comparison is that one will tend to focus on differences between oneself and others, which
glosses over collective suffering because it is ubiquitous (cf. Freudian narcissism of small
differences). The preceding psychological mechanism interacts with the Pollyanna Principle
such that people not only fail to objectively assess their own lives but, also, compare
themselves to others in a way that proves flattering or egosyntonic, e.g., by more frequently
comparing themselves to those who are worse off than themselves rather than those who are
better off than themselves. The grounds for such comparisons are seemingly interminable.
In Chapter Three of Better Never to Have Been: The Harm of Coming Into Existence, David
Benatar directly addresses Arthur Schopenhauer’s metaphysics by saying, “Life, on the
Schopenhauerian view, is a constant state of striving or willing - a state of dissatisfaction”
(Benatar, pg. 76). David Benatar, then, lucidly interprets Arthur Schopenhauer by continuing
to report that, “On the Schopenhauerian view, suffering is all that exists [for sentient creatures]
independently” (Benatar, pg. 77). David Benatar espouses a different, and probably more sophisticated and scientifically supportable, perspective, as he concedes that there are indeed moments, perhaps even periods, of satisfaction, though these occur against a background of dissatisfied striving.
Ironically for someone who loathed myriad aspects of Christianity ("mere despotic theism"),
Arthur Schopenhauer's philosophy of resignation enjoys remarkable affinity with 17th-century
Christian Quietism in its renunciation of the will. Arthur Schopenhauer, accordingly, advises
treating others kindly in the fourth and final book of The World as Will and Representation in
order to temporarily flee the corrosive effects of pure egoism and mitigate suffering in this
lifetime. Schopenhauer reckoned that once the veil of Maya was seen through and lifted - Maya obscures a monist conception of Brahman in the Advaita Vedanta (meaning “non-duality” in Sanskrit) school of Hindu philosophy - through denial of the will and an insight of love, phenomenal boundaries would dissolve, leaving behind a further silencing of volition and a
sympathy for sufferers. Still, the reader can almost hear the sigh as Schopenhauer writes, "It is
all one whether he has been happy or miserable; for his life was never anything more than a
present moment always vanishing; and now it is over." In light of these realities, treating others
kindly in the present and waving off the asinine hustle and bustle of the world doesn't seem
irrational, unlike Will, which relentlessly cracks the whip on sentient creatures who are promised
certain hardship and unfulfilled desires before returning to the blessed calm of non-existence.
"In early youth, as we contemplate our coming life, we are like children in a theatre before the curtain is
raised, sitting there in high spirits and eagerly waiting for the play to begin. It is a blessing we do not know
what is going to happen." -Arthur Schopenhauer
“There are moments, perhaps even periods, of satisfaction, but they occur against a background of
dissatisfied striving.” -David Benatar
Sources:
Benatar, David. Better Never to Have Been: The Harm of Coming Into Existence. New York, New York. Oxford
University Press. 2006.
Russell, Bertrand. A History of Western Philosophy. New York, New York. Simon & Schuster. 1972.
Schopenhauer, Arthur. Essays and Aphorisms. New York, New York. Penguin Classics. 1973.
Schopenhauer, Arthur. On the Fourfold Root of the Principle of Sufficient Reason. Peru, Illinois. Open Court
Publishing Company. 1974.
Schopenhauer, Arthur. The World as Will and Representation (Volume 1). New York, New York. Dover Publications. 1966.
Schopenhauer, Arthur. The World as Will and Representation (Volume 2). New York, New York. Dover Publications. 1966.
May-Tzu
As an Omni Amorist and a Multi Omni I have little sympathy for you straight-laced uptight Bi
Poly Amorists, who whine about a lack of societal acceptance. We Omni Amorists who are Multi
Omni have a lack of acceptance by the very laws of physics. Moreover, you hidebound orthodox
Bi Poly Amorists are only interested in the macro-level of phenomena. We go after the delicious
sub-quantum phenomena too, bonding by the strong and weak forces, not handfasting or
marriage. Young juicy neutrinos and tight little photons are not as grave as gravitons.
You “Bi”s are so straight. Have you ever felt the exquisite sensation of the annihilation of matter
and antimatter or listened to the sounds of release of Hawking radiation, as your soul penetrates
a black hole? You never have to deal with wave-particle duality or the Heisenberg uncertainty
principle in your so-called Poly Amory. You don’t have a clue what it’s like to get a juicy young
neutrino in bed only to learn that when you discover her location in space-time, you don’t know
how fast she’s moving or – Damn that Heisenberg! You go to bed with a gamma radiation
photon and afterward she’s a particle of different charm and spin, not to mention ‘color’, even in
the quantum world. Her charm can make your head spin. And Pauli was no Poly with his
exclusive “exclusion principle.” Try having a quantum non-local relationship with a succulent mu
meson only to find her decaying into an entire family of bizarre vibrating Strings. (Incidentally,
were the vibrating membranes of M-theory circumcised on the eighth day? Is the Multiverse,
itself, Jewish?) You get strung along by String Theory and how do you compete with the Big
Bang? Try competing with the Big Bang in bed.
I love every wave-particle in the quantum foam, just as long as the feeling lasts beyond
space-time, or until my attention wanders, whichever comes first, true objective love. But what
about objective lust, where is it to be found? Alas, there are so many warps in the space-time
continuum, vibrating Strings playing the music of Hermes and Pythagoras, as juicy wavicles
dance seductively, and so few views from eternity.
May-Tzu
Zeno of Elea reportedly had a calendar on which he scheduled special events, such that he
could only approach halfway to an event and then halfway again. The events on his schedule
receded asymptotically toward Zeno’s own event horizon. While Zeno appeared frenetically busy to himself, to an outside observer he generally appeared to be a puddle of Bose–Einstein condensate. In this way Zeno was unaffected by his own death, which only occurred
beyond his event horizon.
May-Tzu