The math describing how the Higgs forms when 8-dim spacetime is reduced to 4-dim spacetime can be found on my web site at
The math proof was not done by me, but by Meinhard Mayer, who used Proposition 11.4 of Chapter II of the book Foundations of Differential Geometry, vol. 1 (John Wiley, 1963) by Kobayashi and Nomizu.
My effort was to give Mayer's math proof a physical interpretation as part of my physics model. I think that Mayer is now a professor emeritus of physics and math at U.C. Irvine.
The geometric volumes related to the physics groups that I use, along with some combinatoric rules, produce the force strengths and constituent masses that are seen in physics. My two most recent papers about that can be found at
As long as my physical representations of math objects are used consistently with the structure of those math objects,
that is all that is needed for physics model building. Conventional physicists often do the same thing. For example, AFAIK nobody has given any "math proof" justifying the use of path integral quantization of the Standard Model Lagrangian, but conventional physicists do use that all the time.
However, there is one part of my model where I am using a math conjecture that I have not proven: that many copies of the real 8-dim Clifford algebra, when put together, form a generalization of the Hyperfinite II1 von Neumann factor. See
I would like to prove that conjecture, or see somebody else prove it, but I have not yet had the time to do it.
As to whether there are "... any new prediction behind your theory ...", the answer is yes. Recently I have put two papers on my web site at
They describe my model of the Tquark - Higgs - Triviality system, and they predict that the LHC will see three states of the Higgs boson:
Since the LHC should be able to see any Higgs state below about 250 GeV soon (within a couple of years or so) after it begins to get results in 2008, my masses of three Higgs states are predictions that should be tested then. There have been some very inconclusive results from Fermilab that encourage me, which I have discussed at
In my model you have (in the fundamental first generation) 8 fermion particles and 8 fermion antiparticles. In the binomial expansion used in my 8-dim Clifford algebra you have
1 8 28 56 70 56 28 8 1
So, maybe only in the special dimensions of my model (8 reduced to 4 of spacetime) can you formulate bosons as fermion-antifermion pairs.
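The binomial row above is just the list of graded dimensions of the 256-dimensional Clifford algebra Cl(8), and is easy to check (an illustrative sketch, not code from the model):

```python
from math import comb

# Graded dimensions of Cl(8): the number of grade-k basis elements
# is the binomial coefficient C(8, k).
row = [comb(8, k) for k in range(9)]
print(row)  # [1, 8, 28, 56, 70, 56, 28, 8, 1]

# They sum to the total dimension 2^8 = 256.
assert sum(row) == 2**8 == 256
```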
As to how "... 8 of the 2-grade bivectors [of the 8-dim real Clifford algebra] AFTER DIMENSIONAL REDUCTION TO 4-D PHYSICAL SPACETIME correspond to the 8 generators of color for SU(3) ? ...", the process is set out in some detail on my web site at
and from some other equivalent points of view at
Further, this text from e-mail messages I sent to Garrett Lisi back in September 2005 might be useful:
"... There are several levels at which you can look at the Spin(8) group associated with Cl(8). (Here I am using Euclidean signature being sloppy for simplicity.) ... The Lie algebra of Spin(8) is generated by the Weyl symmetries of the root vector polytope of Spin(8), which is the 24-cell in 4-dim Euclidean space. So, you can describe Spin(8) in three ways: As a Lie group. As its Lie algebra. As its root vector polytope. What I agree cannot be done is to factor the Lie group Spin(8) into the Cartesian product of Lie groups U(4) x SU(3) x SU(2) x U(1). ... However, what I contend CAN be done, is to do the decomposition at the level of the root vector polytope, ... the way I decompose the 24-cell plus 4 Cartan dimensions for 28-dim Spin(8) into 12-vertex cuboctahedron plus 4 Cartan dimensions for 16-dim U(4) and 8-vertex cube for SU(3) and line with 4 vertices for U(2) = SU(2) x U(1) is an unambiguous procedure ... However, it requires thinking of the Gauge Groups not as Lie groups, and not even as Lie algebras generating them, but as the root vector polytopes that generate the Lie algebras. Such a way of thinking is clearly defined mathematically, as in texts that describe how to construct Lie algebras from the root vector polytopes. For example, see Chapter 21 (especially section 21.3) of the book Representation Theory by Fulton and Harris (Springer 1991) which does that using the Dynkin diagrams that are associated with the root vector polytopes and therefore the Lie algebras. ... However, such a way of thinking is not common for mathematicians, much less for physicists. Even though it is not a common way of thinking, it does work, in the sense of producing a realistic physics model, and it works much better than the more well-known way of avoiding the Coleman-Mandula theorem by using Lie superalgebras instead of Lie algebras. ... In other words, my work began in the early 1980s, and was motivated by supergravity. 
I wanted to make a better and more radical departure from Lie group/algebra than the supergravity departure to Lie superalgebra. I read a paper by Saul-Paul Sirag about physics and the Weyl group, and I realized that my departure could be going down to the Weyl group / root vector level, so I began to work on it, and (even though my then-advisor David Finkelstein was unenthusiastic about it) my work on the Weyl group / root vector stuff led me to my present realistic model. ... AFTER you use the root vector process for the decomposition, THEN AND ONLY THEN do you construct Lie algebra/group structures from the decomposed root vector polytopes, and then you have the conventional MacDowell-Mansouri gravity from the U(2,2) and the Standard Model from the SU(3)xSU(2)xU(1), each in their own sandbox (in the computer system sense of the term "sandbox") ...".
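The root-vector counting in the quoted decomposition can be sketched concretely. The root system of Spin(8) is D4, whose 24 roots are the vertices of the 24-cell, and the vertex counts 24 = 12 + 8 + 4 match the cuboctahedron / cube / 4-vertex-line split quoted above. The particular 12 roots chosen below are an illustrative embedding, not necessarily the specific assignment used in the model:

```python
from itertools import combinations, product

# The 24 roots of Spin(8) = D4: all 4-dim vectors with exactly two
# nonzero coordinates, each +1 or -1.  They are the 24-cell's vertices.
roots = []
for i, j in combinations(range(4), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0, 0, 0, 0]
        v[i], v[j] = si, sj
        roots.append(tuple(v))
assert len(roots) == 24

# 24 roots + 4 Cartan generators = 28 = dim Spin(8), split as
# (12 + 4 Cartan) for 16-dim U(4), 8 for SU(3), 4 for U(2):
assert 24 + 4 == 28 and (12 + 4) + 8 + 4 == 28

# One concrete cuboctahedron inside the 24-cell: the 12 roots with
# vanishing last coordinate (a D3 = A3 subsystem).
cuboct = [r for r in roots if r[3] == 0]
assert len(cuboct) == 12
```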
and whether "... there ... Is ... a way to associate to each one of the 256 CA a matrix and establish a kind of isomorphism between Cl(8) and the 1-D CA ? ...".
Effectively, Wolfram does that in his book by assigning to each such automaton a unique number between 0 and 255, which can be written in 8-digit binary form, and the resulting 256 binary numbers can be assigned to grades in the 256-element Clifford algebra according to how many 1s they contain. (Had the number of 0s been used instead, the result would be the Hodge dual of Cl(8), which is also isomorphic to Cl(8).)
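A quick sanity check of this grading (a sketch under the 1-bit-counting convention described above):

```python
from math import comb

# Group Wolfram's 256 elementary CA rule numbers by the number of
# 1-bits in their 8-bit binary form; that count is the Clifford grade.
by_grade = {}
for n in range(256):
    by_grade.setdefault(bin(n).count("1"), []).append(n)

# Populations per grade reproduce the binomial row 1 8 28 56 70 56 28 8 1.
assert [len(by_grade[g]) for g in range(9)] == [comb(8, g) for g in range(9)]

# Complementing the bits (n -> 255 - n) swaps grade g with grade 8 - g,
# which is exactly the grade pattern of Hodge duality on Cl(8).
for n in range(256):
    assert bin(255 - n).count("1") == 8 - bin(n).count("1")
```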
As to whether I think that all "... the ... CA ... giving ...[the same]... [graphic] results ... [should be seen as equivalent because they] ... give the same information ...",
No: even if they give the same graphic results, their rules are fundamentally different, which is why they are assigned different 8-digit binary numbers. However, having the same graph does indicate that their physical properties may be similar. For example, 11000000 corresponds to the U(1) photon and 10100000 corresponds to the SU(2) neutral weak boson, and both have the same graphic picture of just one point and then blank, which for convenience I will call a "blank" graph. The SU(2) neutral weak boson acts very much like a photon, except that at the low energies where we do experiments the Higgs mechanism gives it mass, so it acts physically somewhat like a "heavy photon".
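The "blank" graph behavior of those two rules can be checked directly, assuming the standard Wolfram rule-numbering convention (an illustrative sketch, not code from the model):

```python
def step(row, rule):
    # Wolfram numbering: bit k of `rule` is the new state of a cell whose
    # 3-cell neighborhood reads as the binary number k (left cell = 4s
    # place).  Periodic boundary conditions.
    n = len(row)
    return [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

photon_like = int("11000000", 2)   # rule 192, a grade-2 (two 1-bit) rule
weak_like   = int("10100000", 2)   # rule 160, also grade 2

row = [0, 0, 0, 1, 0, 0, 0]        # a single "on" cell
print(step(row, photon_like))      # -> all zeros: the "blank" graph
print(step(row, weak_like))        # -> all zeros as well
```

Both rules kill an isolated cell in one step, so starting from a single point they produce one point and then blank, as described.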
As to "... why ... [I] ... consider those 8 2-grade bivectors as the generators of color force SU(3) and not others of the 2-grade bivectors ? ...", I did not do a nice math proof (I think that one could be done, but I have not taken the time to do it yet), but just intuitively looked at the pictures. The breakdown of the 28 that I wanted was:
and the other 28 - 12 - 4 = 12 should correspond to
(You can see that my choices fit those criteria.)
Note that the 8 graphs for SU(3) look different from the other 16+1+3 = 20 in that they have more "volume", which I think is related to the fact that in my physics model the SU(3) acts globally on the internal symmetry space while the SU(2) and U(1) act internally on the internal symmetry space and the U(2,2) of gravity acts internally on the physical space.
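The bivector bookkeeping behind this split is simple to verify (dimension counts only; which particular bivectors go where is the model's choice):

```python
from itertools import combinations

# The grade-2 (bivector) part of Cl(8): C(8,2) = 28 generators e_i ^ e_j,
# matching the 28 dimensions of Spin(8).
bivectors = list(combinations(range(8), 2))
assert len(bivectors) == 28

# The split described above, by dimension only:
#   16 for U(2,2) gravity, 8 for SU(3), 3 for SU(2), 1 for U(1).
assert 16 + 8 + 3 + 1 == 28
# So the 8 SU(3) bivectors stand apart from the other 16 + 1 + 3 = 20.
assert 28 - 8 == 20
```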
Segrob Siul sent a nice list of problems, which I see as:
Here are some comments on them:
but I do not have any more formal "proof" of why that assignment should be chosen (other than the fact that it works in the sense that it produces realistic particle masses, force strengths, etc).
As to "... where to place in time the initial condition ... How [my] model reconciles with the Big Bang theory ? ...", as Segrob Siul says, in my model "... the Universe starts from the voids by introducing a binary choice ...". Roughly, what happens is that before any space or time or gauge bosons or fermions exist:
The 12-step program above built our universe out of just one point out of the huge El Aleph, so that El Aleph is really big and really comprehensive, like the Vedic unified Krishna.
PS - As to why I use Clifford algebra, it is because other math structures seem too limited. For instance: Lie algebras are included in the Clifford algebra bivectors, and so are not as comprehensive; Exterior (Grassmann) algebras don't have spinors; etc. Maybe that is not a formal justification, and I should just say that after trial and error, the Clifford algebras work in the sense that the physics model constructed with them gives the "right answers" when compared with experimental results (which I regard as the voice of g-d). PPS - You might also ask why I use Lagrangian structure, and I can only say that to me Lagrangians seem natural (there are physicists who disagree and don't like Lagrangians, but here, for once, I am on the side of the conventional physicists), and Lagrangians are a very effective way to describe the physics that we see in experiments.
As to using the isomorphism between CAs and Cl(8) to construct an "... operation between CAs so that computer experiments can be performed and reproduce the results from ...[my]... model (locally, because the isomorphism is with Cl(8) ) ...", that would be nice, and it need not be only local because tensor products of Cl(8) make up larger-than-local neighborhoods and tensor products of the CA operation should be definable and workable.
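The dimension bookkeeping for those tensor products is straightforward; by the 8-periodicity of real Clifford algebras (up to signature details), Cl(8k) is a k-fold tensor product of Cl(8):

```python
# Each Cl(8) factor has dimension 2^8 = 256; dimensions multiply under
# tensor product, so k factors give 2^(8k), the dimension of Cl(8k).
dim_cl8 = 2**8
assert dim_cl8 == 256
assert dim_cl8 * dim_cl8 == 2**16          # Cl(8) (x) Cl(8) ~ Cl(16)
for k in range(1, 5):
    assert dim_cl8**k == 2**(8 * k)        # k-fold product ~ Cl(8k)
```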
On a large scale, what I would like to see would be to use such computer experiments to describe basins of attraction etc to see how our future (or possible futures) might look, by considering our futures to be described by quantum game theory played out among the possibilities of Many-Worlds quantum theory. For a very much oversimplified example see http://www.valdostamuseum.org/hamsmith/ManyFates.html#example
Such basins of attraction have been described for individual CAs (see for example the book "The Global Dynamics of Cellular Automata" by Wuensche and Lesser (Addison-Wesley 1992)) and it would be fun to see that for lots of CAs acting as Clifford algebra elements.
On a smaller scale, maybe such an operation would allow computer CA experiments that would make difficult calculations (such as QCD SU(3) color force calculations) easier. As to whether that approach to QCD might work, when I look at 2D successes with respect to fluid dynamics (see the book "Lattice Gas Methods for Partial Differential Equations" ed. by Gary Doolen (Addison-Wesley 1990)) I get optimistic, but when I note that Wolfram even as late as his New Kind of Science book seems to have made no substantial progress with respect to QCD calculations, I get pessimistic. However, maybe Wolfram never thought about combining CAs with Clifford algebras and using the resulting operation.
As to "... whether Clifford algebra could be applied to cellular automata in Planck scale geometry, whether e.g. each Planck volume could be considered a discrete cell in a cellular automaton. ...", that seems to me to be related to the work of Paola Zizzi on the universe as a big quantum computer. In checking the web, I saw at http://www.quantumbionet.org/eng/index.php?pagina=60 a description of her current work, and an article The "Poetry of a Logical Truth" by her at http://www.quantumbionet.org/eng/index.php?pagina=84
As to Paola Zizzi saying "and corresponds to a superposed state of 10^9 quantum registers", which is much smaller than the number of tubulins in the human brain, she and I discussed that back in 2000 and we agreed, as she said, that "... As far as the number of tubulins is concerned ... the total number of them in our brain is 10^18 ... the selected quantum register, which is the n=10^9, contains 10^18 qubits...".
Here is why I have been using 10^18 tubulins per human brain: The human brain contains about 10^11 neurons; and there are 10^7 Tubulin Dimers per neuron. As references, see the Osaka paper QUANTUM COMPUTING IN MICROTUBULES - AN INTRA-NEURAL CORRELATE OF CONSCIOUSNESS? by Stuart Hameroff in which he mentions: "... the human brain's 10^11 neurons ..." and the Orch OR paper http://www.quantumconsciousness.org/penrose-hameroff/orchOR.html Orchestrated Objective Reduction of Quantum Coherence in Brain Microtubules: The "Orch OR" Model for Consciousness by Stuart Hameroff and Roger Penrose in which they say: "... Yu and Baas (1994) measured about 10^7 tubulins per neuron. ...". Their Yu and Baas (1994) reference is "... Yu, W., and Baas, P.W. (1994) Changes in microtubule number and length during axon differentiation. J. Neuroscience 14(5):2818-2829....".
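The arithmetic behind the 10^18 figure:

```python
# ~10^11 neurons per human brain, ~10^7 tubulin dimers per neuron
# (the Hameroff / Yu-Baas figures quoted above):
neurons = 1e11
tubulins_per_neuron = 1e7
assert neurons * tubulins_per_neuron == 1e18

# An alternative 10^6-per-neuron estimate gives 10^17, i.e. 10% of that.
assert neurons * 1e6 == 1e17
```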
The Osaka paper was on the web some years ago, and I downloaded its text back then, and my quote from it is from that text download. I don't know exactly where to find it on the web nowadays. A reference for the number of neurons that is on the web now is PHYSICAL REVIEW E, VOLUME 65, 061901 (Received 2 May 2000; revised manuscript received 7 August 2001; published 10 June 2002), Quantum computation in brain microtubules: Decoherence and biological feasibility by S. Hagan, S. R. Hameroff and J. A. Tuszynski at http://www.quantumconsciousness.org/pdfs/decoherence.pdf where they mention "... about 10^8 neurons (approximately 0.1%-1% of the entire brain) ...".
The Samsonovich-Hameroff et al ideas about patterns of excited tubulins in a microtubule do remind me of the cellular automata patterns that may be isomorphic to the 256 elements of the Cl(8) Clifford algebra. Where my ideas may differ from Samsonovich-Hameroff et al may be in their ACT (acousto-conformational transformation) mechanism by which MTs communicate with each other. My idea is that the communication is by resonant gravity connection. It is based on Penrose's use of gravity as an Orch OR mechanism and on generalizing to gravitational gravitons Carver Mead's description of resonance causing atomic emission of electromagnetic photons - see http://www.valdostamuseum.org/hamsmith/QuanConResonance.html#resonance
Samsonovich-Hameroff et al describe two ways for ACT to work:
I think that 1 is too slow in propagating throughout the brain. 2 is basically the picture that I have, but I use resonant gravity as the basic underlying force connecting all the MTs, and although they do use Penrose's idea of gravity for Orch OR collapse, I think that they may not use gravity to maintain coherent superpositions of MTs throughout the brain.
As to why I think that 1 - "... two neighboring coupled MTs (or two parts of the same MT) ..." - may be too slow to be a way for ACT to communicate among MTs to maintain superposition coherence throughout the brain, Samsonovich-Hameroff et al say about that mechanism: "... global coherent oscillations are initiated spontaneously by thermal fluctuations and amplified by energy release due to conformational motions stimulated by these oscillations ...". I think that the thermal fluctuations have a time-scale; their generation of oscillations introduces another time-delay; and the conformational motions introduce yet another time-delay. All those time factors are much slower than graviton speed-of-light connections, and added up they are slower than the ambient thermal fluctuations that can make a superposition decohere in the way used by Tegmark in his criticism of quantum consciousness. So it seems to me that mechanism 1 is doomed to failure as a means of maintaining superposition coherence.
In my picture, each tubulin emits and absorbs gravitons (at speed of light) from every other tubulin involved in the superposition, so the gravitons keep all the tubulins in step to be in coherent superposition by exchanging gravitons much faster than the time-scale of the thermal fluctuations that Tegmark tries to use for decoherence. Since the tubulins interact much faster than the thermal fluctuations, they can easily evade any decohering effects related to the thermal fluctuations.
An analogy occurred to me. Consider the USA stealth aircraft F117. Before its development, fixed-wing aircraft were generally aerodynamically stable in that, left alone, they tended to continue on a predictable path. However, the F117 was aerodynamically unstable in that, left alone, small fluctuations of turbulence would destabilize the aircraft, and no human pilot could react fast enough to correct those instabilities, so with only a human pilot it would not continue on a predictable path (and would likely crash). Due to its instabilities, it was known to its test pilots as the "Wobblin' Goblin". Only with the development of automated computer control systems that reacted much faster than the time scale of turbulent fluctuations could such an aircraft be useful to an air force. Since such reaction time was far faster than any human reaction time, some test pilots quit because there was no way they themselves could be in "control" of the aircraft. At the risk of belaboring the obvious:
turbulent fluctuations that destabilize the F117
    = thermal fluctuations that Tegmark used to argue for decoherence

slow human reflexes (slower than the turbulent fluctuations) fail to stabilize the F117
    = slow processes (not much faster than thermal fluctuations) fail to stop Tegmark-type decoherence

fast computer control systems (much faster than the turbulent fluctuations) can and do stabilize the F117
    = fast processes (carried by speed-of-light gravitons) allow maintenance of coherence of MT state superpositions
In one of his papers known as a "water paper" (I downloaded it years ago, but the link I used then seems to be invalid now) Stuart Hameroff says: "... Herbert Frohlich, an early contributor to the understanding of superconductivity, also predicted quantum coherence in living cells (based on earlier work by Oliver Penrose and Lars Onsager ... Frohlich ... theorized that sets of protein dipoles in a common electromagnetic field (e.g. proteins within a polarized membrane, subunits within an electret polymer like microtubules) undergo coherent conformational excitations if energy is supplied. Frohlich postulated that biochemical and thermal energy from the surrounding "heat bath" provides such energy. Cooperative, organized processes leading to coherent excitations emerged, according to Frohlich, because of structural coherence of hydrophobic dipoles in a common voltage gradient. ...".
That would be using electromagnetic photon processes to maintain the coherence, although they may use gravity for Orch OR decoherence. I prefer to use gravity for both things.
As to the ideas described above as Frohlich's, I see a problem with his source of energy for a driven non-equilibrium system: How can the Frohlich energy source (surrounding heat bath) produce a coherence that is stable against a decohering influence (the same surrounding heat bath) that is just as strong as the energy source? Note that this situation is VERY different from the sun and plant life which was mentioned by Penrose in Emperor's New Mind. At page 320, Penrose says "... All this is made possible by the fact that the sun is a hot-spot in the sky! The sky is in a state of temperature imbalance: one small region of it, namely that occupied by the sun, is at a very much higher temperature than the rest. ... The earth gets energy from that hot-spot in a low entropy form ... and it re-radiates to the cold regions in a high-entropy form ...". Unlike the sun in the sky, Frohlich's surrounding heat bath source is at the SAME temperature as the rest of the brain (sky). For the brain to work like the sun and sky, you will have to find a part of the brain (sun) that is a lot hotter than most of the brain (sky). That should be easy to find experimentally, and in fact I seriously doubt that it exists, because it should be so easy to find that it would have already been found if it existed. So, I don't like the Frohlich electromagnetic coherence mechanism for maintaining brain-wide coherent states of MTs. However, I should say that electromagnetic processes are useful and may play some other roles in brain function.
I found some work of Nanopoulos et al (as to whether it is original or application of ideas of others, I do not know) to be interesting, particularly application of "... error- correcting mathematical code known as the K2(13, 2^6, 5) code. ..." to MT information.
For maintaining a coherent superposition it is indeed 'the more, the better' because if they are linked together (as in Carver Mead's book Collective Electrodynamics) anything trying to decohere the superposition must be strong enough to do all of them, not just one of them. Philip Anderson calls that collective phenomenon a Quantum Protectorate - see http://www.valdostamuseum.org/hamsmith/QuanCon2.html#quantumprotectorate
On the other hand, Penrose Orch OR collapse-decoherence is based on the time-energy uncertainty principle h = T E, which gives a time T = h / E at which consciousness-collapse-decoherence takes place. The details of the actual calculation in my model are too long for e-mail but can be found at http://www.valdostamuseum.org/hamsmith/QuanCon.html#orchor and all the material on that page. Roughly, the resulting decoherence time T_N for N tubulins is T_N = N^(-5/3) x 10^26 sec. For example, for 4 x 10^15 tubulins (far from all 10^18 in the brain, but still a large number) the time is about 100 milliseconds, which is roughly the EEG alpha frequency, and for 10^17 tubulins the time is about half a millisecond.
So, the more tubulins you have the more protected they are by the Quantum Protectorate, but the faster they collapse by Penrose Orch OR, so the less time you have to think your thought.
There is in my model a third relevant process, which is collapse due to the quantum fluctuations of the universe at large which I call GRW decoherence. My view of GRW itself is described at http://www.valdostamuseum.org/hamsmith/GRW.html and the relationship between GRW and Orch OR decoherence is shown at http://www.valdostamuseum.org/hamsmith/QuanCon.html#timegraph The equation for GRW decoherence time is roughly T_N = ( 1 / N ) x 3 x 10^14 sec which in the chart on the link immediately above is compared with the Orch OR decoherence time of about T_N = ( 1 / N^(5/3)) x 10^26 sec
There is still a fourth relevant process that limits the size of a single brain (i.e., limits the number of tubulins N), which is based on Paola Zizzi's quantum decoherence model, which I like and so include in my model. It gives the upper limit of about N = 2^64 or roughly 10^19. For details see http://www.valdostamuseum.org/hamsmith/QuanCon3.html#quinflation
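A numerical sketch of these timescales (an illustration, not the author's code; note that the worked examples quoted in the text, such as ~100 ms for 4 x 10^15 tubulins, come out if the Orch OR prefactor is 10^25 rather than the 10^26 written above, so the prefactor is left adjustable):

```python
def t_orch(n, prefactor=1e26):
    # Penrose Orch OR decoherence time for n tubulins, in seconds:
    # T_N = prefactor / N^(5/3)
    return prefactor / n ** (5.0 / 3.0)

def t_grw(n):
    # GRW decoherence time for n tubulins, in seconds: T_N = 3e14 / N
    return 3e14 / n

# Crossover below which GRW decoheres the superposition before Orch OR:
# setting the two times equal gives N* = (prefactor / 3e14)^(3/2).
n_star = (1e26 / 3e14) ** 1.5
assert abs(t_orch(n_star) - t_grw(n_star)) < 1e-6 * t_grw(n_star)

# Zizzi's upper limit on a single brain: 2^64, roughly 1.8 x 10^19.
assert 1e19 < 2**64 < 2e19
```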
So, there are four relevant processes in my model:
For less than about 10^15 Tubulins or so, GRW decoheres the superposition BEFORE the Orch OR decoherence takes place, so you must have AT LEAST 10^15 Tubulins in order to have consciousness based on Penrose-type Orch OR.
Given the human brain size limit of about 10^18 tubulins, the fastest that humans can think would be about 10^(-5) seconds.
As the Zizzi upper limit is about N = 10^19, the human brain has evolved to be almost as smart as a single brain can get. How could humanity get smarter? Maybe by cooperating more and fighting less. Maybe by having multiple brains (dolphins have 2). Maybe as in the movie Matrix or the Star Trek Borg by being forced involuntarily to cooperate. Of course, there is always the possibility that humanity might just stay the same, and be superseded by something else.
As to relevant experimental tests, some might be:
As to what I think of the Hameroff-Penrose article, I like parts of it (after all my model is based on some of their ideas) but I do differ a bit on some points and as to some calculated numbers. For example, Segrob Siul says that "They calculate 2 x 10^11 tubulins in superposition will reach threshold in 25 msec (40 Hz)" while my calculations (using 10^6 tubulins per neuron, or 10% of all tubulins) give:
Time T_N                            Number of Tubulins    Number of Neurons
10^(-5) sec                         10^18                 10^12
5 x 10^(-4) sec (2 kHz)             10^17                 10^11
25 x 10^(-3) sec (40 Hz)            10^16                 10^10
100 x 10^(-3) sec (EEG alpha)       4 x 10^15             4 x 10^9
500 x 10^(-3) sec (Radin/Bierman)   1.5 x 10^15           1.5 x 10^9
I don't worry about numerical differences even of 1 or 2 orders of magnitude (factors of 10 or 100) because exact details of the processes are not well known. If a lot of calculations are all more or less consistent in that range, it indicates to me that the overall model is OK and worth the effort of making more refined versions.
However, I have not done much more work in this area since 2002, because the first paper that the Cornell arXiv rejected when I was blacklisted was a quantum consciousness paper. In a futile attempt to get off the blacklist, I started writing some more clearly (to me) plain vanilla physics papers calculating things like neutrino masses and mixing angles, and the conformal gravity model that explains the Pioneer effect and the WMAP observations of the ratio Dark Energy : Dark Matter : Ordinary Matter. I even thought that if I wrote out my model in terms of string theory they might let me off the blacklist, but they did not (my string model did not have conventional 1-1 supersymmetry and had a physical interpretation of strings as world-lines of point particles, and they probably disliked that). Anyhow, I realized around 2004 that writing good plain-vanilla-physics stuff would not help, but I have kept on writing that kind of stuff (lately some Tquark and Higgs stuff that might be seen at the LHC, and stuff about tapping into Dark Energy with Josephson Junction arrays), and I have not yet gotten around to doing more work on refining quantum consciousness models, constructing a generalized II1 Hyperfinite von Neumann factor, etc. There is not enough time for me, working by myself, to do all these things.
As to "... why both Penrose OR and GRW are needed. ... ? ", GRW and Zizzi are not needed in a bare-bones Penrose-Hameroff model to show that human consciousness is based on quantum gravity of tubulins. I include them because I think in the context of my overall Clifford algebra generalized II1 Hyperfinite von Neumann factor physics model it is natural for them to exist, and so I should take them into account. When I do take them into account:
I wish I had time to work all such stuff out in detail, but probably the best I can do is a little bit about a few things in the short time that probably remains of my lifetime.
My model is based on Penrose-Hameroff because of two original and very useful ideas of theirs: coding the information using tubulins in microtubules as 2-state quantum systems; and using quantum gravity for Orch OR decoherence. I like and use some of their related ideas, such as information theory codes (I would use quantum information theory) related to patterns of tubulin states in the cylindrical microtubules. I also like the things that I added to their model:
So, the things that I like about my model that Penrose-Hameroff does not have are the graviton quantum protectorate and lower and upper bounds for Orch OR brain size, with the human brain fitting in OK.
The things I added do not contradict basic Penrose-Hameroff, they just add more stuff to it that seems to work.
Penrose's book "The Emperor's New Mind" (ENM) is indeed very interesting. However, Penrose wrote ENM around 1990, years before the development of quantum computing, quantum information theory, and quantum game theory, which began to be developed around 1995. His older books (Emperor's New Mind and Shadows of the Mind) did mention quantum computing, but only on the level of early ideas of possibility-in-principle due to Deutsch and Feynman, and he did not then seem to take into account the major advances beginning around 1995, such as Cerf and Adami quant-ph/9512022 etc. For example, in Emperor's New Mind, Penrose said about quantum computing "... So far these results are a little disappointing ... but these are early days yet. ...".
Papers like that of Cerf and Adami at quant-ph/9512022 show that quantum information theory is very much like particle physics quantum theory, and since my model has a Clifford algebra basis for particle physics and for quantum consciousness, quantum computing seems to me to be effectively what Penrose is looking for (but had not been developed when he wrote ENM).
Penrose's latest book, "The Road to Reality" (2006), does mention quantum computers, saying that they "... would make use of the vastness of the sizes of the kinds of space of wavefunctions ..." and also mentions quantum information theory, which he calls "quanglement", and about which he says "... the precise role of quanglement in ... the circumstances under which R takes over from U ... is not yet very clear, to my [Penrose's] mind. ... A more promising connection is with some of the ideas of twistor theory, and these will be examined briefly in section 33.2 ...". In section 33.2, Penrose discusses twistors, and says "... it turns out that the conformal group has an important place in twistor theory ... We shall see more explicitly what the role of this group is in the next two sections ...". Those "next two sections" 33.3 and 33.4 say "... a 15-dimensional symmetry group - the conformal group - ... is ... Apart from the two-to-one nature of the correspondence arising from ... reversibility of the generator directions, O(2,4) ... The shortest ... way to describe a ... twistor is to say that it is a ... half spinor ... for O(2,4) ...". Since my model gets gravity from that conformal group, it is fundamentally a twistor theory, and since my model is also fundamentally a Clifford algebra connected to quantum information theory, I think that my model is a concrete example of what Penrose needs to complete his program.
Tony Smith's Home Page