Advanced Flat Earth Theory

  • 775 Replies


  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #330 on: December 10, 2016, 10:57:57 AM »

Charles M. Hill has shown, via an analysis of millisecond-pulsar data, that clocks on the earth exhibit cyclic rate variations due to the eccentricity of the earth's orbit around the sun (in the geocentric model, the same cyclic variations are caused by the orbit of the Sun, which is bounded by the two Tropics).

Hill has shown, using external pulsar timing sources, that if the Earth is not in a circular orbit, the local clock rate will vary as a function of the changing gravitational potential and orbital velocity.

Charles M. Hill (1995) has reported results comparing the clocks
on the earth to millisecond pulsars. This comparison
clearly reveals the source for the cyclic clock biases
described above. Specifically, in the sun’s frame, the vector
sum of the earth’s orbital velocity and the earth’s spin
velocity causes a cyclic clock rate term which integrates
into a cyclic clock bias as a function of the along track
distance from the earth’s center. (Though not addressed
here, the clocks in the GPS satellites would also suffer
cyclic clock-rate terms as a result of the vector sum of the
satellite orbit velocity with the earth’s orbit velocity.) Note
that in the sun’s frame these cyclic clock disturbances are
properly recognized and removed in the process of
determining a correct time within the sun’s barycentric
frame. Like the cyclic clock-rate error, which occurs as a
result of ignoring the sun’s gravitational potential, this
velocity product (in the sun’s frame) gives a clock rate
error that is ignored in the earth’s frame.
As Hill (1995) describes, the pulsar data reveals a diurnal
variation in the clock rate of about 300 ps/s peak-to-peak.
The noon second is about 300 ps shorter (frequency
higher at noon) than the midnight second because of the
product of the earth’s orbital and spin velocities at the
equator. The term causing this clock rate variation comes
from the squaring of the vector addition of the two
velocities. It is given by:

Δf = (v_e·v_s/c²)·cos θ

where the ‘‘e’’ subscript designates the orbital velocity, the
‘‘s’’ subscript the spin velocity, and θ is the angle between
the earth’s orbital velocity and the earth spin velocity at
the clock. Plugging in the values gives a clock rate peak
magnitude of 153 ps/s, or 2.1 μs per radian of the earth's
rotation rate. Clearly, the cosine term integrates to a value
of one for a single quadrant of rotation. The result directly
corresponds to the bias term given in Eq. (8) above.
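The quoted figures can be checked with a rough numeric sketch. The orbital speed, spin speed, and rotation rate below are standard textbook values, not taken from the article:

```python
# Sketch of the diurnal clock-rate term (v_e*v_s/c^2)*cos(theta) quoted above,
# assuming standard values for the earth's orbital and equatorial spin speeds.
C = 299_792_458.0        # speed of light, m/s
V_ORB = 29_785.0         # m/s, earth's mean orbital speed (v_e)
V_SPIN = 465.1           # m/s, equatorial spin speed (v_s)
OMEGA_SPIN = 7.292e-5    # rad/s, earth's rotation rate

peak_rate = V_ORB * V_SPIN / C**2          # peak clock rate (cos(theta) = 1)
bias_per_radian = peak_rate / OMEGA_SPIN   # rate integrated over one radian of spin
print(f"peak clock rate ≈ {peak_rate*1e12:.0f} ps/s")     # ≈ 154 ps/s
print(f"bias ≈ {bias_per_radian*1e6:.1f} us per radian")  # ≈ 2.1 us
```

The result lands within a percent of the 153 ps/s and 2.1 μs figures in the excerpt.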
The difference in sun’s gravitational potential causes a
clock rate term given by:

Δf = {1 − 2GM/[(r_a − r_e·cos φ)c²]}^(1/2) − {1 − 2GM/(r_a·c²)}^(1/2)

where the "a" subscript designates the orbital radius, the
"e" subscript the earth's radius, and φ is the angle between
the earth radius to the clock and the orbital radius of the
earth. Plugging in the values gives a clock rate peak
magnitude of 0.42 ps/s (365 times smaller than the velocity
cross-product term) or 2.1 μs per radian of the earth's
orbital rate. The sign of this gravitational term is opposite
to that of the diurnal term. (The frequency is lower at
noon.) It causes the diurnal period of one sidereal day,
which results from Eq. (10), to become a period of one
solar day. Again, the result clearly corresponds to the bias
given in Eq. (8) above. In the earth's frame, both clock rate
terms are ignored. It is by ignoring these cyclic rate terms
in the earth’s frame that the clock biases are generated,
which cause the speed of light to appear as isotropic.
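The solar-potential term can be sketched the same way, evaluating the square-root expression above at local noon (where the clock sits closest to the sun); the constants are standard values, not from the article:

```python
import math

# Sketch of the solar gravitational-potential clock-rate term quoted above,
# assuming standard constants; the peak occurs at local noon.
C = 299_792_458.0
GM_SUN = 1.32712e20   # m^3/s^2, solar gravitational parameter
R_A = 1.496e11        # m, earth's orbital radius
R_E = 6.371e6         # m, earth's radius

def rate(r):
    # clock rate factor at distance r from the sun
    return math.sqrt(1 - 2 * GM_SUN / (r * C**2))

delta_f = rate(R_A - R_E) - rate(R_A)   # noon side of the earth
print(f"peak clock rate ≈ {abs(delta_f)*1e12:.2f} ps/s")  # ≈ 0.42 ps/s
```

The magnitude comes out at the quoted 0.42 ps/s, with the negative sign confirming that the frequency is lower at noon.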
The point of the above is worth emphasizing again. Clocks
external to the solar system (millisecond pulsars) can be
compared to clocks on the earth. Since clocks run at a
unique rate, the difference in the external clocks and the
earth-bound clocks can provide us with the unique
knowledge of the true clock rate of clocks on the earth. The
values obtained show that a cyclic clock rate occurs which
integrates into a cyclic clock bias. The cyclic clock rate
arises from two sources: (1) the product term of the spin
velocity combined with the orbital velocity, and (2) the
difference in the sun's gravitational potential at the clock's
position compared to that at the center of the
earth. When the earth's frame is used, it is easy to ignore
the composite velocity term because the orbital velocity is
removed. (But even though it is easy to ignore, removing it
assigns an erroneous cyclic clock rate to the clocks
according to the millisecond pulsars.) However, the
absence of the second cyclic term, due to the gradient of
the sun’s gravitational potential, cannot be explained by
SRT when the earth’s frame is used. As we saw above, two
faulty attempts have been made to explain its absence. The
millisecond pulsars testify to its presence, and it causes the
clock bias value to have a cyclic period of one year such
that the bias always remains a function of the distance in
the direction of the changing orbital velocity vector.

Using the SRT, no proper explanation for the apparently
missing effect of the sun’s gravitational potential upon the
clocks in the earth’s frame can be found.

SRT cannot explain the missing effect from the sun's
gravitational potential, and it incorrectly assigns multiple
rates to the same clock in the identical environment.


However, upon further reflection, it became
apparent that one significant complication with respect to
the two frames was not dealt with. Specifically, GPS was
compared in the two frames assuming that the earth’s
orbital velocity was constant.

What is the significance of this interim conclusion? We
have shown that, assuming the speed of light is isotropic
in the sun’s frame, the velocity of clocks on the spinning
earth will cause them to be biased by just the amount
needed to make it appear as if the speed of light is
actually isotropic on the earth.

However, the true believer in
SRT can argue that this is simply a coincidence and that it
is still the magic of SRT which automatically causes the
speed of light to be isotropic on the earth. There is no way
to refute his argument in this simplified case where we
have assumed that the direction of the orbital velocity
vector is constant. But, when the change in the orbital
velocity direction is allowed, we get an astonishing result.

By contrast, if SRT/GRT is
correct, we would expect that the clocks on earth and in
the GPS system would require an adjustment for the
effect of the sun’s differential gravitational potential.
Since clocks on earth and in the GPS system function
properly by ignoring the effect of the sun's gravitational
potential, we must conclude that SRT/GRT is wrong.

« Last Edit: December 10, 2016, 11:02:58 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #331 on: December 10, 2016, 10:55:30 PM »

Another observation that also clearly conflicts with the
constancy and isotropy of the velocity of light was discovered
during the implementation and calibration of
set-ups for Very Long Baseline Interferometry (VLBI)
radio astronomy observations. The resolution of optical
and radio astronomy observations can be improved by
orders of magnitude by analyzing the data recorded in
different observatories over the earth surface using interferometric
methods. The condition is that these data be
synchronous. The method consists in superposing coherently
the data recorded in different observatories with the
help of computers taking into account the instantaneous
position of the antennas, etc. For VLBI radio astronomy
observations, clock synchronization at intercontinental
distances via the GPS achieves 0.1 ns. Nevertheless,
on testing the so synchronized clocks by confronting
them with the arrival of the wave fronts from distant
pulsars, which according to the TR should be synchronous,
it was observed that the pulsar signal reaches the leading
side of the Earth 4.2 μs before the trailing side along the
orbital motion of the Earth. This discrepancy exceeds
the time resolution by more than four orders of magnitude.
Nevertheless along the transverse direction the arrival
of the pulsar signal was synchronous. This apparent
discrepancy in the GPS clock synchronization is again
raising very hot debates about the nature of space. Some
people speak of scandalous clocks that are biased
along the Earth's orbital motion; others see in these
facts definitive proof that the velocity of light along different
directions within the solar system is not the same.
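The quoted 4.2 μs is consistent with a simple first-order estimate, v·D/c², of the arrival bias across the earth's diameter. This is an illustrative check with standard values, not a calculation from the quoted text:

```python
# First-order arrival-time bias of a pulsar wavefront across the earth's
# diameter along the orbital motion: dt ≈ v_orb * D / c^2 (assumed values).
C = 299_792_458.0
V_ORB = 29_800.0      # m/s, earth's orbital speed
D_EARTH = 1.2742e7    # m, earth's diameter

dt = V_ORB * D_EARTH / C**2
print(f"front-to-rear arrival bias ≈ {dt*1e6:.1f} us")  # ≈ 4.2 us
```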

Many people believe that GR accounts for all the observed
effects caused by gravitational fields. However, in
reality GR is unable to explain an increasing number of
clear observational facts, several of them discovered recently
with the help of the GPS. For instance, GR
predicts the gravitational time dilation and the slowing of
the rate of clocks by the gravitational potential of Earth,
of the Sun, of the galaxy etc. Due to the gravitational
time dilation of the solar gravitational potential, clocks in
the GPS satellites having their orbital plane nearly parallel
to the Earth-Sun axis should undergo a 12 hour period
harmonic variation in their rate so that the difference
between the delay accumulated along the half of the orbit
closest to the Sun amounts up to about 24 ns in the time
display, which would be recovered along the half of the
orbit farthest from the Sun. Such an oscillation exceeds
the resolution of the measurements by more than two
orders of magnitude and, if present, would be very easily
observed. Nevertheless, contradicting the predictions of
GR, no sign of such oscillation is observed.
This is the
well-known and long-unsolved noon-midnight problem.
In fact, observations show that the rate of the
atomic clocks on Earth and in the 24 GPS satellites is
ruled exclusively by the Earth's gravitational
field, and that effects of the solar gravitational potential
are completely absent.
Surprisingly and happily the GPS
works better than expected from the TR.
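The "about 24 ns" figure above can be reproduced with a back-of-the-envelope integration: the solar potential difference across a GPS orbit gives a sinusoidal rate term, and integrating a sinusoid multiplies its amplitude by T/(2π). All constants below are standard values assumed for illustration:

```python
import math

# Sketch of the quoted ~24 ns solar-potential oscillation for a GPS clock,
# assuming standard constants and an orbit plane parallel to the Earth-Sun axis.
C = 299_792_458.0
GM_SUN = 1.32712e20    # m^3/s^2
R_ORBIT = 1.496e11     # m, Earth-Sun distance
R_GPS = 2.656e7        # m, GPS orbital radius
T_GPS = 43_082.0       # s, GPS orbital period (~11 h 58 min)

# peak fractional rate difference between near-sun and far-sun points
rate_amp = (GM_SUN / C**2) * R_GPS / R_ORBIT**2
# integrating the sinusoidal rate over the orbit: peak-to-peak time bias
bias_pp = 2 * rate_amp * T_GPS / (2 * math.pi)
print(f"predicted oscillation ≈ {bias_pp*1e9:.0f} ns peak-to-peak")  # ≈ 24 ns
```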

Obviously the gravitational
slowing of the atomic clocks on Earth cannot be due to
relative velocity because these clocks rest with respect to
the laboratory observer. What is immediately disturbing
here is that two completely distinct physical causes produce
identical effects, which by itself is highly suspicious.
GR gives only a geometrical interpretation to the
gravitational time dilation. However, if motions cause
time dilation, why then does the orbital motion of Earth
suppress the time dilation caused by the solar gravitational
potential on the earth-based and GPS clocks?
In one case motion causes time dilation, and in the
other case it suppresses it. This contradiction makes it evident
that what causes the gravitational time dilation is not the
gravitational potential, and that moreover this time dilation
cannot be caused by a scalar quantity. If the time dilation
shown by the atomic clocks within the earth-based
laboratories is not due to the gravitational potential, and
cannot be due to relative velocity either, then it is necessarily
due to some other cause. This impasse once more
puts in check the central idea of the TR, according to
which the relative velocity with respect to the observer is
the physical parameter that rules the effects of motions.
The above facts show that the parameter that rules the
effects of motions is not relative velocity but a velocity
of a more fundamental nature.

See also (pg. 147)

On the other hand, the time dilation effect of the solar
gravitational field on the atomic clocks orbiting with
Earth round the Sun, which is predicted by GR but not
observed, is a highly precise observation. It exceeds by
orders of magnitude the experimental precision and
hence is infinitely more reliable. If the orbital motion of
Earth round the Sun suppresses the time dilation due to
the solar gravitational field and moreover does not show
the predicted relativistic time dilation due to this orbital
motion, then it seems reasonable that a clock in a satellite
orbiting round the Earth in a direct equatorial orbit or in a
jet flying round the Earth too should give no evidence of
such a relativistic time dilation. The relativistic time dilation
alleged in both these round the world Sagnac experiments
is in clear and frontal contradiction with the
absence of such a relativistic time dilation effect in the
case of the orbiting Earth round the Sun.



Re: Advanced Flat Earth Theory
« Reply #332 on: December 11, 2016, 02:05:49 AM »

Ruderfer, Martin (1960) “First-Order Ether Drift
Experiment Using the Mössbauer Radiation,”
Physical Review Letters, Vol. 5, No. 3, Sept. 1, pp

Ruderfer, Martin (1961) “Errata—First-Order Ether
Drift Experiment Using the Mössbauer Radiation,”
Physical Review Letters, Vol. 7, No. 9, Nov. 1, p 361

In 1961, M. Ruderfer proved mathematically and experimentally, using the spinning Mössbauer effect, the FIRST NULL RESULT in ether-drift theory.

"What students are not told is that the Turner & Hill experiment is a garbled version of a 1960 investigation by Ruderfer, who was seeking to discover fluctuations in gamma ray frequency which might indicate motion of an electromagnetic medium across the plane of the spinning disk, causing cyclic Doppler-type changes in the transit times of the gamma rays crossing that disk. Initially Ruderfer put it out that his results were negative for ether drift, but 14 months later he published an errata which stated that mathematical analysis had shown that if an ether wind were blowing across the plane of the spinning disk, one would expect that Doppler fluctuations in the frequency of the gamma radiation detected at the centre of the disk would be compensated by equal and opposite fluctuations in the emitted frequency of the gamma rays, caused by the effect of variations in the ether speed of the source.

What Ruderfer's experiment had stumbled on was that there could be a static electromagnetic medium at rest with respect to the rest of the universe. And it could be that any motion with respect to that medium affects the gamma ray source, and the central Mossbauer detector, by slowing down the rate of process of each by half the square of the ratio of each one's absolute ether speed to the absolute speed of propagation of light. If such were the case, it would follow (as a mathematical necessity) that irrespective of the direction and speed of ether drift of the lab, the central detector of the spinning disk would always observe a steady slowing of the gamma radiation frequency by half the square of the ratio of the spin speed of the source to the out-and-return speed of light, as measured by the detector in a reference frame which is non-rotating with respect to the fixed stars.

Ruderfer's experiment and his errata were of great significance in the history of modern physics because of their psychological impact on the ether deniers. Previously, the Michelson & Morley ether drift experiment had been successfully portrayed as 'negative' rather than 'null' because the proposed compensating factor, Fitzgerald contraction, was a theoretical construct. However, in the case of the Ruderfer experiment, the ether deniers were shocked to find that the experiment provided proof of the existence of the compensating factor in the observed frequency reduction, making it indubitably a null ether drift experiment.

Since the motion-induced frequency reduction of the gamma ray source is by a steady 'half the square of the ratio of the disk spin speed to the speed of propagation of the gamma rays', and since this is exactly the amount required to give the same result, irrespective of whether the disk is at ether rest, or is orientated edgewise (or at right angles) to a hypothetical ether drift, this constituted prima facie evidence for something for which the ether deniers have a particular fear and loathing - 'laws of nature which conspire to conceal the effect of ether drift'."
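The "half the square of the ratio" slowing quoted above is the ordinary second-order factor (1/2)(v/c)². A minimal numeric sketch, using an assumed, illustrative rim speed (not a value from the quoted text):

```python
# Fractional frequency reduction "half the square of the ratio of the spin
# speed to the speed of light": (1/2)*(v/c)^2. The rim speed is hypothetical.
C = 299_792_458.0
v = 300.0                                # m/s, assumed rotor rim speed
fractional_slowing = 0.5 * (v / C) ** 2
print(f"fractional frequency reduction ≈ {fractional_slowing:.2e}")
```

At a few hundred m/s the effect is of order 10⁻¹³, which is within the resolution of Mössbauer-rotor experiments.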

Analysis of the spinning Mossbauer experiments is a natural step toward analysis of the
slightly more complex and much larger-scale Global Positioning System (GPS). This
system constitutes a large scale near-equivalent to the spinning Mossbauer experiments.
The transit time between the satellite and ground-based receivers is routinely measured.
In addition, the atomic clocks on the satellite are carefully monitored; and high precision
corrections are provided as part of the information transmitted from the satellites.
Because the satellites and the receivers rotate at different rates (unlike the Mossbauer
experiments), a correction for the motion of the receiver during the transit time is
required. This correction is generally referred to as a Sagnac correction, since it adjusts
for anisotropy of the speed of light as far as the receiver is concerned. Why is there no
requirement for a Sagnac correction due to the earth’s orbital motion? Like the transit
time in the spinning Mossbauer experiments, any such effect would be completely
canceled by the orbital-velocity effect on the satellite clocks.

Specifically, there is substantial independent experimental evidence that clock speed always affects the clock frequency and, as the GPS system shows, the spin velocity of the earth clearly affects the clock rate. This being the case, the null result of the rotating Mössbauer experiments actually implies that an ether drift must exist or else the clock effect would not be canceled and a null result would not be present.

A GPS satellite orbiting the Earth, while at the same time the entire system is orbiting the Sun, IS A LARGE SCALE SPINNING MOSSBAUER EXPERIMENT.

Given the very fact that these GPS satellites DO NOT record the orbital Sagnac effect, means that THE HYPOTHESES OF THE RUDERFER EXPERIMENT ARE FULFILLED.

However, indirectly, the counteracting effects of the transit time and clock slowing induced biases indicate that an ether drift is present. This is because there is independent evidence that clocks are slowed as a result of their speed. Thus, ether drift must exist or else the clock slowing effect would be observed.

In fact, there is other evidence that the wave-front bending and absence of the
Sagnac effect in the earth-centered frame is due to the clock-biasing effects of velocity
and that an ether drift velocity actually exists in the earth-centered frame. First, the
gradient of the solar gravitational effects upon clocks on the surface of the earth is such
that the clocks will speed up and slow down in precisely the correct way to retain the
appropriate up-wind and down-wind clock biases. Thus, the clocks must be biased or
else the solar gravitational effects would become apparent.

Second, as Charles M. Hill has shown, clocks on the earth clearly vary their rate as
the speed of the earth around the sun varies. Earth clocks run slower when the earth’s
speed increases and the earth’s distance from the sun is decreased near perihelion. The
earth’s clocks run faster near aphelion. This variation must be counteracted via an ether drift effect else it could be detected in GPS and VLBI experiments.

This is an IOP article.

The author recognizes that the earth's orbital Sagnac effect is missing, whereas the earth's rotational Sagnac effect is not.

He uses GPS and a link between Japan and the US to prove this.

In GPS the actual magnitude of the Sagnac correction
due to earth’s rotation depends on the positions of
satellites and receiver and a typical value is 30 m, as the
propagation time is about 0.1 s and the linear speed due
to earth's rotation is about 464 m/s at the equator. The
GPS provides an accuracy of about 10 m or better in positioning.
Thus the precision of GPS will be degraded significantly,
if the Sagnac correction due to earth’s rotation
is not taken into account. On the other hand, the orbital
motion of the earth around the sun has a linear speed of
about 30 km/s which is about 100 times that of earth’s
rotation. Thus the present high-precision GPS would be
entirely impossible if the omitted correction due to orbital
motion is really necessary.
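The magnitudes in the excerpt can be sketched directly: the Sagnac-type correction is roughly the receiver speed times the signal's propagation time. The values below are assumed round numbers (a transit time of about 0.07 s, i.e. ~20,000 km at c, close to the quoted 0.1 s):

```python
# Sketch of the rotational vs. hypothetical orbital Sagnac correction in GPS,
# assuming round values; correction ≈ receiver speed × propagation time.
V_SPIN = 464.0      # m/s, equatorial speed of earth's rotation
V_ORB = 30_000.0    # m/s, earth's orbital speed
T_PROP = 0.07       # s, assumed satellite-to-receiver propagation time

rot_correction = V_SPIN * V_ORB * 0 + V_SPIN * T_PROP  # ≈ 30 m, applied in GPS
orb_correction = V_ORB * T_PROP                        # ≈ 2 km, never applied
print(f"rotational ≈ {rot_correction:.0f} m, orbital ≈ {orb_correction:.0f} m")
```

A ~30 m correction matters against ~10 m positioning accuracy; an unapplied ~2 km correction would make GPS unusable, which is the point the excerpt is making.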

In an intercontinental microwave link between Japan and
the USA via a geostationary satellite as relay, the influence
of earth’s rotation is also demonstrated in a high-precision
time comparison between the atomic clocks at two remote
ground stations.
In this transpacific-link experiment, a synchronization
error as large as about 0.3 µs was observed unexpectedly.

Meanwhile, as in GPS, no effects of earth's orbital motion
are reported in these links, although they would be
easier to observe if they existed.
Thereby, it is evident
that the wave propagation in GPS or the intercontinental
microwave link depends on the earth’s rotation, but
is entirely independent of earth’s orbital motion around
the sun or whatever. As a consequence, the propagation
mechanism in GPS or intercontinental link can be viewed
as classical in conjunction with an ECI frame, rather than
the ECEF or any other frame, being selected as the unique
propagation frame. In other words, the wave in GPS or the
intercontinental microwave link can be viewed as propagating
via a classical medium stationary in a geocentric
inertial frame.


"The term “Sagnac effect” is part of the vocabulary of only the observer in the rotating reference frame. The corresponding correction applied by the inertial observer might be called a “velocity correction.” While the interpretation of the correction is different in the two frames, the numerical value is the same in either frame."

Calculations performed at the NASA Goddard Space Flight Center.

Please note the theoretical orbital sagnac shows up in these calculations, but is not picked up/registered/recorded by GPS satellites.

The effect of an ether drift on the GPS one-way range measurements is exactly
counteracted by the effect of the ether drift on the receiver clocks.

As such, modern science is making a last stand in order to explain the GPS/Ruderfer/Sagnac effects/experiments: a local ether theory named MLET (Modified Lorentz Ether Theory).

MLET (Modified Lorentz Ether Theory) is based on the Lorentz transformation (Lorentz factor/contraction), and thus, is equally invalid.

The colossal mistakes committed by Lorentz and Einstein in deriving the Lorentz transformation/factor:

Dr. Hans Zweig, Stanford University: (missing orbital motion Sagnac effect)

« Last Edit: August 14, 2019, 06:23:26 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #333 on: December 11, 2016, 10:22:25 AM »

"For a circular path of radius R, the difference between the different time intervals can also be represented as Δt = 2vl/c², where v = ΩR is the speed of the circular motion and l = 2πR is the circumference of the circle.

The travel-time difference of two counterpropagating light beams in moving fiber is proportional to both the total length and the speed of the fiber, regardless of whether the motion is circular or uniform.

In a segment of uniformly moving fiber with a speed of v and a length of Δl, the travel-time difference is 2vΔl/c²."

The Sagnac effect also applies to straight-line motion.

Now, we can easily understand how the path of the radiation from the GPS satellite to the receiver clearly follows a straight line and the instantaneous velocity of the receiver (ether strings rotation) is not affected significantly by the radial acceleration during the instant of reception.

The rotation of the telluric currents/ether strings/wind/drift above the surface of the flat Earth affects the light signal: the difference in time is the Sagnac effect.

Dr. Dayton Miller ether drift results: "The measurements were latitude-dependent as well."

Dr. Yuri Galaev ether drift results:


journal pgs 211-225

Expression (28) makes it possible, using the results [5, 6] obtained at the altitude Z_M, to calculate the altitude dependence of the ether wind speed W_M(Z) at the latitude φ_M. (pg. 217)

On page 218 we can find the complete formula (28), (30) (see also (35) and page 223).

"Basically, a laser is fired through a half-silvered mirror. Half the light goes one way around a large coiled loop, and half the light goes the other way. They are then recombined and interfered. Due to rotation, after the light goes through in either direction, the wavelength of light will be slightly shifted for each direction. This will establish a beat frequency on the detector which is proportional to the angular velocity of the ring laser gyroscope."

"Ring lasers are inertial rotation sensors using the Sagnac effect, which is the frequency splitting of two counter-rotating laser beams due to rotation (Sagnac 1913)."

A ring laser gyroscope will record the effect of the ether drift/strings upon the laser beams: the Sagnac effect.
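The beat frequency described in the quotes follows the standard ring-laser relation Δf = 4AΩ/(λP). A sketch with an assumed, illustrative square ring (1 m², HeNe wavelength) whose normal is taken parallel to the rotation axis, sensing the earth's rotation rate:

```python
# Sketch of a ring laser's beat frequency from earth rotation, assuming an
# illustrative 1 m^2 square HeNe ring with its normal along the rotation axis:
# df = 4*A*Omega / (lambda * P).
OMEGA_EARTH = 7.292e-5   # rad/s, sidereal rotation rate
AREA = 1.0               # m^2, assumed enclosed area
PERIM = 4.0              # m, assumed perimeter
WAVELEN = 633e-9         # m, HeNe laser wavelength

beat = 4 * AREA * OMEGA_EARTH / (WAVELEN * PERIM)
print(f"beat frequency ≈ {beat:.0f} Hz")  # ≈ 115 Hz
```

In a real instrument only the component of Ω along the ring's normal contributes, so the observed beat scales with the cosine of the angle between them.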

« Last Edit: December 12, 2016, 03:40:29 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #334 on: December 15, 2016, 03:20:26 AM »

The optical whirlwind effect of an artificial rotation of an overall system really shows itself, without unexpected compensation, as a first-order effect of the motion relative to the ether. The experiment directly reveals […] the linear delay […] that the overall rotation of the optical system produces in the ether between the two systems of inverse waves T and R during their propagation around the circuit.

G. Sagnac

"By applying Stokes’ rule the derivation of Sagnac effect can be changed from an
integration over a surface to an integration along a line, which is correct in relation to
where the light really is. This demonstrates that Sagnac effect is translational, and that an ether-wind has been detected by Sagnac and by the GPS system.

Since Sagnac effect is an effect in light that is enclosed inside an optical fiber we can
conclude that Sagnac effect is distributed along a line and not over an area. No light and
no rotation exists in the enclosed area. Sagnac detected therefore an effect of translation
although he had to rotate the equipment to produce the effect inside the fiber. By this
solution Sagnac found a method to circumvent Einstein's clock synchronization problem. The Sagnac effect is distributed in every small part of the line.

The fact that Sagnac effect is caused by translation means that the same effect as in a rotating circle also must exist in a translating straight line.

The most important error regarding the Sagnac effect is the classification of the effect as rotational. The effect is translational, since it is distributed along a line. This means that the same effect must exist along a straight line. In the Global Positioning System (GPS), a compensation for this translational effect is applied. The high precision of the GPS system demands this correction when time stations on our planet are compared.

The GPS system cannot afford to ignore the ether wind."

"If the Sagnac effect can be produced in linear uniform motion, then the claim that it
is a characteristic of radial motion is simply incorrect. Because the rules of SR apply to linear uniform motion, the only conclusion is that SR is incorrect."

"The underlying experiment by Sagnac was first performed almost a century ago in 1913.  The effect is routinely employed today in fiber optic gyroscopes which measures very minute changes in angular orientation.

There are multiple claims in the literature attempting to use either the special relativity theory (SRT) or the general theory of relativity (GRT) to explain the effect. The conclusion in virtually all of the explanations is that it is a rotational effect.  As far as I am aware, the earliest claim that the Sagnac Effect was not a rotational effect was by Ives. Ives was a pioneer in the development of television at Bell Telephone Laboratories.  The following quotations are from his 1938 article.
     The experiment was interpreted by its author as positive evidence for the existence of the luminiferous ether…

     It is the purpose of this paper first to show that the Sagnac experiment in its essentials involves no consideration of rotation, and second to investigate the results obtained when transported clocks are used.
Ives analyzed the Sagnac experiment using a hexagonal path rather than a circular one.
He concluded with this statement:
     The net result of this study appears to be to leave the argument of Sagnac as to the significance of his experiment as strong as it ever was.

The Sagnac effect is not due to rotation, but instead is a linear effect due to a true anisotropic light speed in a moving frame.

In 1938 Ives showed by analysis that the measured Sagnac effect would be unchanged if the Sagnac phase detector were moved along a chord of a hexagon-shaped light path rather than rotating the entire structure. Thus, he showed the effect could be induced without rotation or acceleration."

"Something was affecting the light in order for it to consistently produce the fringe displacement. Sagnac (1913) demonstrated it was ether.

The GPS satellites must be pre-programmed with the Sagnac correction in order to work properly. This is a fact that is generally hidden from the public."

« Last Edit: December 16, 2016, 12:44:55 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #335 on: December 16, 2016, 12:42:28 AM »

Further proofs that the Sagnac effect applies to uniform/linear/translational motion.

The other question one might ask is at what level curvature is important--if it is circular motion which causes the Sagnac effect as Ashby claims, how much does the path have to deviate from a straight line to cause the effect? At Los Angeles the earth rotates about 27 meters during the nominal 70 millisecond transit time of the signal from satellite to receiver. The deviation of the 27 meter movement from the straight line chord distance is only 35 microns at its largest point. It certainly seems incredible that a 35 micron deviation from a straight line could induce a 27 meter change in the measured range.

As a final proof that it is movement of the receiver which is significant--not whether that movement is in a curved or straight line path--a test was run using the highly precise differential carrier phase solution. The reference site was stationary on the earth and assumed to properly apply the Sagnac effect. However, at the remote site the antenna was moved up and down 32 centimeters (at Los Angeles) over an eight second interval. The result of the height movement was that the remote receiver followed a straight line path with respect to the center of the earth.

The Sagnac effect was still applied at the remote receiver. The resulting solved-for position simply moved up and down in height by the 32 centimeters, with rms residuals
which were unchanged (i.e., a few millimeters). If a straight-line path did not need the Sagnac adjustment to the ranges, the rms residuals should have increased to multiple meters. This shows again that it is any motion, not just circular motion, which causes the Sagnac effect.

(Conducting a Crucial Experiment of the Constancy of the Speed of Light Using GPS, R. Wang/R. Hatch)

In the Sagnac experiment, an ether wind must exist due to its propagation above the flat surface of the Earth which leads to the observed time difference.

"Sagnac detected the first-order effect of a man-made ether wind by using light following
a closed path in a rotating apparatus. In relation to his equipment, light traveled at
different speeds in two opposite directions."

In light of these results, mainstream science has resorted to modifying the speed of light
using two approaches: the Modified Lorentz Ether Theory and Non Time Orthogonal Analysis. However, both of these hypotheses are based either on the Lorentz transformation or on the Minkowski metric/spacetime, and as such are totally in error.

The Sagnac effect demonstrates that electromagnetic beams traveling in opposite directions will not travel at the same speed.

"So what is making one of the light beams travel slower? Sagnac said it was due to the ether impeding its velocity - a resistance that is easily generated by rotating the table. So predictable and precise are these results that the “Sagnac effect,” as it is commonly called, is used routinely in today's technology for the purpose of sensing rotation, as well as in mechanical gyroscopes."
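The rotation sensors mentioned here (ring-laser and fiber-optic gyroscopes) are designed around the standard Sagnac relation Δt = 4AΩ/c², where A is the area enclosed by the light path and Ω the rotation rate. A minimal sketch (the 1 m² loop and Earth-rate rotation are illustrative assumptions, not values from the text):

```python
C = 299792458.0  # speed of light, m/s

def sagnac_dt(area_m2, omega_rad_s):
    """Time difference between counter-propagating beams in a rotating loop."""
    return 4.0 * area_m2 * omega_rad_s / C**2

# illustrative: a 1 m^2 loop rotating at the Earth rate
dt = sagnac_dt(1.0, 7.2921159e-5)
print(f"delta-t = {dt:.2e} s")  # of order 1e-21 s for these values
```

Tiny as this is, interferometry resolves it, which is why the effect is routinely used for sensing rotation.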



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #336 on: December 17, 2016, 12:40:46 AM »

In 1913, G. Sagnac proved that the speed of light is variable (Sagnac effect).

This fact is in total agreement with the original equations published by J.C. Maxwell in 1861:

Einstein, 1905:

"The principle of the constancy of the velocity of light is of course contained in Maxwell's equations”

We can infer immediately that Einstein had no knowledge whatsoever of the original ether equations derived by Maxwell, and based his false/erroneous conclusions on the MODIFIED/CENSORED Heaviside-Lorentz equations.

In the original ether equations, the speed of light is variable.

"The average speed of this flow is what determines the speed of light."

Aether sinks = dextrorotatory magnetic monopoles/subquarks

Aether sources = laevorotatory magnetic monopoles/subquarks

"Einstein claims that “The principle of the constancy of the velocity of light is of course contained in Maxwell's equations”.

If the Lorentz force had still been included as one of Maxwell’s equations, they could have been written in total time derivative format (see Appendix A in ‘The Double Helix Theory of the Magnetic Field’) and Einstein would not have been able to make this claim. A total time derivative electromagnetic wave equation would allow the electromagnetic wave speed to alter from the perspective of a moving observer."

"And even more interesting still is the fact that Maxwell's original equation (D) is introduced in modern textbooks, under the misnomer of ‘The Lorentz Force’, as being something extra that is lacking in Maxwell's equations, and which is needed as an extra equation to complement Maxwell's equations in order to make the set complete, as if it had never been one of Maxwell's equations in the first place! Maxwell in fact derived the so-called Lorentz force when Lorentz was only eight years old. Using the name ‘The Lorentz Force’ in modern textbooks for equation (D) is somewhat regrettable, in that it gives the false impression that the μv×H expression is something that arises as a consequence of doing a ‘Lorentz transformation’.

A Lorentz transformation is an unfortunate product of Hendrik Lorentz's misunderstandings regarding the subject of electromagnetism, and these misunderstandings led to even greater misunderstandings when Albert Einstein got onto the job. Neither Lorentz nor Einstein seemed to have been aware of the contents of Maxwell's original papers, while both of them seemed to be under the impression that they were fixing something that wasn't broken in the first place. In doing so, Einstein managed to drop the luminiferous aether out of physics altogether, claiming that he was basing his investigation on what he had read in the so-called ‘Maxwell-Hertz equations for empty space’! But whatever these Maxwell-Hertz equations might have been, they certainly can't have been Maxwell's original equations.

This is a tragic story of confusion heaped upon more confusion. The aether was a crucial aspect in the development of Maxwell's equations, yet in 1905, Albert Einstein managed to impose Galileo's ‘Principle of Equivalence’ upon Maxwell's equations while ignoring the aether altogether. The result was the abominable product which is hailed by modern physicists and known as ‘The Special Theory of Relativity’." (magnetic monopoles and the original set of Maxwell's equations)

« Last Edit: June 30, 2017, 09:07:50 PM by sandokhan »



  • Administrator
  • 17694
  • President of The Flat Earth Society
Re: Advanced Flat Earth Theory
« Reply #337 on: December 17, 2016, 03:22:39 AM »
I am your biggest fan, but you have been off base lately; you had a solid argument concerning certain views, but I feel like you aren't cutting the mustard lately. Bring back the math.
The illusion is shattered if we ask what goes on behind the scenes.



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #338 on: December 17, 2016, 04:02:09 AM »
Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.

N. Tesla



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #339 on: December 18, 2016, 12:27:57 AM »

In the Sagnac experiment, the light speed varied to c + ωr in one direction and c – ωr in the other direction.

"A solution to the original/corrected Maxwell equations indicates that these equations are invariant under the Galilean transformation. Velocity vectors are additive, which means that the speed of light can be exceeded."

"Maxwell’s [modified] equations are a brilliant formulation of the laws of electromagnetism. However, they were derived for static systems, i.e., where there was no motion relative to the relevant coordinate system (RCS). At the turn of the twentieth century some scientists assumed that these equations pertain also to dynamic systems, wherefrom it follows that the speed of light is constant in all inertial coordinate systems. This in turn led to the Lorentz transformation and to Einstein’s theory of relativity.

The complete set of the EM (corrected Maxwell) equations is presented in chapter 1. It is shown that the notion of the speed of light being constant in all inertial coordinate systems stems from the wrong application of Maxwell's [modified] equations to dynamic systems. It is also pointed out that, due to terms restored to the corrected Maxwell equations, they do not equate under the Lorentz transformation, rendering it, along with the theory of relativity which is based on this transformation, invalid.

A solution to the original/corrected Maxwell equations indicates that these equations are invariant under the Galilean transformation.

Consequently velocity vectors are additive, which means that the speed of light can be exceeded.

The common representation of Maxwell’s [modified] equations is valid only for static systems.

The physicists at the turn of the twentieth century were unaware of this limitation. They assumed that Maxwell’s [modified] equations were universally valid (i.e.: applicable to any inertial coordinate system) and tried to apply them to dynamic systems, which led to inconsistencies. But instead of realizing and correcting the error (by modifying Maxwell’s equations, i.e., by using the original ether equations published by Maxwell in 1861) they introduced the Lorentz transformation, which was the foundation of the flawed theory of relativity."
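The contrast the quoted author draws can be put in numbers. Under Galilean addition velocities simply sum; under the Lorentz-transformation rule (which the author rejects) the composed speed never exceeds c. A minimal comparison with illustrative speeds:

```python
C = 299792458.0  # speed of light, m/s

def galilean_add(u, v):
    """Galilean composition: velocities are simply additive."""
    return u + v

def lorentz_add(u, v):
    """Relativistic composition: result is always below c."""
    return (u + v) / (1.0 + u * v / C**2)

u = v = 0.8 * C
print(f"Galilean: {galilean_add(u, v)/C:.3f} c")  # 1.600 c -- exceeds c
print(f"Lorentz:  {lorentz_add(u, v)/C:.3f} c")   # 0.976 c -- capped below c
```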

J.C. Maxwell used a dynamical model to derive his famous equations. (section 8, pages 8-9, exposes one of the deceptions used to hide the original form and meaning of Maxwell's aether equations)

"Maxwell was working on the basis that his equations apply in the rest frame of the luminiferous medium. The speed of light would therefore be measured relative to that medium. When observed from any frame of reference that is in motion relative to the luminiferous medium, the speed of light would simply be measured as per Galilean vector addition of velocities. Einstein wrongly preached that Maxwell’s equations are independent of any particular frame of reference and that hence the speed of light would always be observed to have the same value no matter from which frame of reference it is observed. Einstein had absolutely no basis upon which to draw this absurd conclusion and it is his abandoning of Galilean vector addition of velocities in relation to the speed of light that lies at the cornerstone of his special theory of relativity, and as such it is fair to state that on this basis alone Einstein’s theories of relativity are completely and totally false."

E = vXB and Maxwell’s Fourth Equation (how the convective component of Maxwell's original equations was carefully removed/censored in order to facilitate the introduction of the false Lorentz transformation/relativistic effects)

"Einstein claims that “The principle of the constancy of the velocity of light is of course contained in Maxwell's equations”.

If the Lorentz force had still been included as one of Maxwell’s equations, they could have been written in total time derivative format (see Appendix A in ‘The Double Helix Theory of the Magnetic Field’) and Einstein would not have been able to make this claim. A total time derivative electromagnetic wave equation would allow the electromagnetic wave speed to alter from the perspective of a moving observer."

(Conducting a Crucial Experiment of the Constancy of the Speed of Light Using GPS, R. Wang/R. Hatch) pages 499-500, 503 of the paper (pgs 5-6, 9 in the pdf document)



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #340 on: December 21, 2016, 12:05:13 AM »

When light is emitted in opposite directions on a platform in motion, it does NOT travel at the same speed.

The Sagnac effect proves the existence of the ether, the absolute reference frame of the universe.

The Sagnac effect is a direct consequence of the physical context expressed by J.C. Maxwell's original set of ether equations.

The Significance of Maxwell’s Equations

"Modern physics denies the existence of such a medium just as it denies the existence of centrifugal force as a physical reality, even though centrifugal force is the most significant factor behind the electromagnetic wave propagation mechanism within this medium. Maxwell’s real achievements are in fact totally alien to modern physics and indeed he was not remotely working along the same lines as Einstein.

In the year 1856, Weber and Kohlrausch performed an experiment with a Leyden jar and established the ratio between a quantity of electricity measured statically to the same quantity of electricity measured electrodynamically. This ratio turned out to be numerically related to the speed of light.

Maxwell showed that the ratio in question could be used in Newton’s equation for the speed of a wave in an elastic solid, hence confirming that light is an elastic wave in a particulate solid.

Einstein’s entire basis for postulating the constancy of the speed of light lay with the misinformed view that Maxwell’s equations do not contain a convective term. It is in this respect in particular that Maxwell’s contribution to electromagnetism has been totally distorted. Maxwell and Einstein were not remotely working along the same lines, while Maxwell was quite clear about the fact that the speed of light is measured relative to an elastic solid (comprised of fluid vortices), and that it is most certainly not frame independent as is believed by relativists." (Weber-Kohlrausch Experiment)

"In 1856, Weber and Kohlrausch linked optical phenomena to electromagnetism. They discharged a Leyden jar and linked the speed of light to the electrostatic/electrodynamic ratio. In 1861, Maxwell showed how this ratio can be inserted into Newton’s equation for the speed of a wave in an elastic solid, and hence he showed that light is an electromagnetic wave. Modern textbooks unfortunately approach this issue with the benefit of hindsight while eliminating the rationale behind it by eliminating the sea of aethereal vortices along with its associated density and transverse elasticity. And to make matters worse, modern textbooks tend to focus on the mathematical solutions to Maxwell’s equations rather than on the physical meaning behind them." (The Speed of Light varies with Magnetic Flux Density)
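The numerical coincidence described in these passages survives in modern notation: the electrostatic and electromagnetic constants combine to give the measured speed of light, c = 1/√(μ₀ε₀). A quick check:

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability, H/m (classical exact value)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(f"1/sqrt(mu0*eps0) = {c:.0f} m/s")  # about 299792458 m/s
```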



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #341 on: December 22, 2016, 06:05:11 AM »
MARCH 20, 1662 AD

Each and every controversy/contradiction of science, religion, and history could have been settled on the date of March 20, 1662.

There were to be no more future discussions/debates on vacuum vs. ether, theory of evolution vs. creationism, spherical earth vs. flat surface of the earth, heliocentricity vs. geocentricity: each and every dispute should have been resolved in totality on that very date.

March 20, 1662, represents by far the most important astronomical date in the history of scientific observations, of science in general, of astrophysics, of religion.

Because on that very date, right on the day of the vernal equinox, a total solar eclipse occurred.

And yet, in the official chronology of history, with the exception of a very obscure reference, NONE of the famous astronomers of the day paid the slightest attention to this remarkable celestial phenomenon.

The Jesuits in India/China, F. Verbiest, J. Schall von Bell, even the young J. Flamsteed failed to notice/record this most important of all the total solar eclipses.

It is only in a very brief mention by Domenico Cassini, that this solar eclipse is even recorded at all.

D. Cassini, Ephemerides novissimae motuum coelestium: (pg. 28, 29, 34, 35)

What should have been by far the most important astronomical event of the millennium, of all time, a chance to settle once and for all the Gregorian calendar reform controversy and the raging debate on heliocentricity vs. geocentricity, aroused no interest at all from the scientific community at that point in time.

And yet, the registered date for the total solar eclipse which occurred in early 1662, March 20 (right on the vernal equinox) cannot be true.

The Gregorian calendar was developed in the later part of the 16th century, mainly by Aloysius Lilius and Christophorus Clavius. It was named after Pope Gregory XIII, who decreed its implementation in 1582. By that time the Julian calendar had run out of step with the astronomical data in two ways. In its solar part, it had accumulated an error of ten days; the true average vernal equinox fell on March 11 rather than March 21 as the calendar assumed. This was corrected by omitting the ten calendar days October 5 through October 14, 1582.

Papal Bull, Gregory XIII, 1582:

Therefore we took care not only that the vernal equinox returns on its former date, of which it has already deviated approximately ten days since the Nicene Council, and so that the fourteenth day of the Paschal moon is given its rightful place, from which it is now distant four days and more, but also that there is founded a methodical and rational system which ensures, in the future, that the equinox and the fourteenth day of the moon do not move from their appropriate positions.

According to the official chronology and astronomy, the direction of Earth's rotation axis executes a slow precession with a period of approximately 26,000 years.

Therefore, in the year 325 AD, the official date of the Council of Nicaea, the winter solstice MUST HAVE FALLEN on December 21 or December 22; in the year 968 AD, on December 16; and in the year 1582, on December 11.

We are told that the motivation for the Gregorian reform was that the Julian calendar assumes that the time between vernal equinoxes is 365.25 days, when in fact it is about 11 minutes less. The accumulated error between these values was about 10 days (counting from the Council of Nicaea) when the reform was made, resulting in the equinox occurring on March 11 and moving steadily earlier in the calendar; likewise, by the 16th century AD the winter solstice fell around December 11.
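The "about 10 days" figure follows from simple arithmetic, taking the conventional dates (325 AD for Nicaea, 1582 for the reform) and the standard year lengths:

```python
JULIAN_YEAR = 365.25      # days, as the Julian calendar assumes
TROPICAL_YEAR = 365.2422  # days, mean tropical year (about 11 minutes shorter)

years = 1582 - 325        # Council of Nicaea to the Gregorian reform
drift_days = years * (JULIAN_YEAR - TROPICAL_YEAR)
print(f"accumulated drift = {drift_days:.1f} days")  # about ten days
```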

Domenico Cassini, in the official chronology of history, agrees wholeheartedly with the Gregorian calendar reform, and even defends it vigorously.

But it was Cassini who installed the most accurate observatory at San Petronio, and made ample use of it to monitor the accuracy of the new calendar. Cassini’s observations allowed exact calculations of future equinoxes on the Gregorian calendar to be made in advance.

If the date of the total solar eclipse of the spring in the year 1662 AD really had fallen on March 20, the very day of the vernal equinox, then it would have constituted a perfect, total astronomical verification of the Gregorian calendar reform; also a valid proof that the chronology of history, from Hipparchus to Ptolemy and from Exiguus to Clavius, based on the axial precession of the Earth, was correct (thus resolving once and for all the heliocentricity vs. geocentricity debate).

No RE axial precession means that the Earth did not ever orbit around the Sun, as we have been led to believe. And it means that the entire chronology of the official history has been forged at least after 1750 AD. (the vernal equinox in the year 1582 AD must have fallen on March 16)

The falsification of the Gregorian calendar reform:



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #342 on: December 26, 2016, 02:46:04 AM »

“Maxwell had actually written a unified field theory of electromagnetics and gravitation - not just the unification of electricity and magnetism as is commonly believed. Further, this can readily be shown by examining some significant - even startling - elementary differences between quaternion mathematics and the present vector mathematics.

Let us briefly look at one of these key differences, to show that the present vector mathematics expression of Maxwell's theory is only a subset of his quaternion theory.

What Heaviside's theory specifically omitted was electrogravitation (EG) - the ability to transform electromagnetic force field energy into gravitational potential energy, and vice-versa. And that has been omitted because of the assumptions of the vector theory in the nature of: (1) EM vector field combination, and (2) a zero-vector resultant of the interaction of multiple nonzero EM force vectors.

Briefly, in Heaviside's vector mathematics, the abstract vector space in which the vectors exist has no stress nor consequent "curvature" in it. That is, the mathematical vector space does not change due to interactions between the vectors it "contains". This, of course, is not necessarily true in the "real space" of the physical world. Thus when such an abstract vector space and its concomitant coordinate system are taken to model physical space (physical reality), the model will be valid only when the physical space itself has no appreciable local curvature, and is in a state of total equilibrium with respect to its interactions with observable charged particles and masses.

So abstract vector theory implicitly assumes "no locked-in stress energy of the vector space itself". By assumption, the only interactions are between the objects (the vectors) placed in/on that space. Therefore, when two or more translation vectors sum or multiply locally to a zero-vector translation resultant, in such an "unstressable" vector space one is justified in: (1) replacing the system of summing/multiplying translation vector components with a zero-vector, and (2) discarding the previous translation vector components of the zero-vector system. That is, one may properly equate the translation zero-vector system with a zero-vector, since the presence or absence of the combined vectors can have no further action. Specifically, axiomatically they exert no stress on the abstract vector space. Under those assumptions, the system can be replaced by its equivalent zero-factor alone.

Note that, applied to electromagnetics, this modeling procedure eliminates any theoretical possibility of electrogravitation (EM stress curvature of local ether/aether continuum) a priori.

Force Vectors are Translations of Stress

Conceptually, a force vector is actually a release of some implied stress in a local medium. The force is applied to create stress in a second local region immediately adjacent to the primary region of stress. Of course the stress being thus "translated" by the force vector may be either tensile or compressive in nature, but a priori the force vector always represents the translation of that stress from its tail-end toward its head.

Consequently, an EM force vector is a gradient (inflow or outflow) in a scalar EM potential (stress), where the referent potential stress may be either tensile or compressive. Since modern Heaviside-type vectors do not distinguish between, or even recognize, the two "head and tail" scalar EM potentials involved in a vector, one needs to refer to Whittaker to get it right. Whittaker, a fine mathematician in his own right, showed that any vector field can be replaced by two scalar waves. Unfortunately, the electrogravitational implications of Whittaker's profound work were not recognized and followed up, and their connection to Maxwell's quaternionic EM theory was not noticed nor examined.

So the idea of a vector EM force represents a release of a primary "tail-associated" scalar potential, and a bleedoff of that potential. It represents an increase in its primary "head-associated" scalar potential, and a bleed into that potential. Each scalar potential itself represents trapped EM energy density in the local vacuum, in the form of two or more (even an infinite number of) internal (infolded) EM force vector components (which may be either fixed, dynamic, or a blend of the two). The trapped energy density, however, may be either positive or negative with respect to the local energy density of the standard ambient vacuum, since the potential may be either compressive or tensile.

So in 1988, we have finally arrived at the state where the potentials are more-or-less understood by a consensus of quantum physicists as being the primary EM reality, while the force fields are now seen to be secondary effects generated from the potentials.

This understanding, however, still has a long way to go before it penetrates the main bastions of physics and electrical engineering. Most scientists and electrical engineers are still adamantly committed to the Heaviside version of Maxwell's theory, and are strongly conditioned that the EM force fields are the primary effectors in electromagnetics.

They are also nearly totally resistant to the idea that there may be a fundamental error in automatically replacing a zero-resultant system of EM translation force vectors with a zero factor, rather than replacing the system with the combination of a conditional zero vector (conditional for translation only) and a scalar stress potential. Consequently, most orthodox scientists and engineers are still strongly conditioned against quaternions, and erroneously believe that Heaviside's translation was complete. Seemingly it has never occurred to most mathematicians and scientists that zero-vectors are usually not truly equal. Stress-wise, zero resultant combinant systems of multiple translation vectors usually differ in: (1) magnitude, (2) polarization, (3) type of stress, (4) frequency components, (5) nonlinear components, and (6) dynamic internal variation.

In the vector product of two identical vectors,

A × A = |A| |A| sin θ n

we will obtain the zero vector 0, where |A| is the length (magnitude) of the vector, n is the unit normal, and the angle θ between the two identical vectors is zero.

After Heaviside and Gibbs, electrical engineers are trained to replace the cross product with the zero vector, discarding the components of the zero vector system as having no further consequences, either electromagnetically or physically.

Now let us look at the comparable quaternion expression of this situation. First, in addition to the three vector components, a quaternion also has a scalar component, w.

So the quaternion q corresponding to the vector A is:

q = w + A_x i + A_y j + A_z k

The physical interpretation of this equation is that there locally exists a stress w in the medium and a translation change vector in that stress.

When the quaternion is multiplied times itself (that is, times an identical quaternion), the vector part zeros, just as it did for the vector expression. However, the scalar part does not go to zero. Instead, we have:

q × q = A² + 0

The zero translation vector resultant of the system shows that the system now does not produce translation of a charged particle. Because the force vectors have been infolded, the scalar term shows that the system is stressing, and the magnitude of that stress is given by the scalar term A².
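Hamilton's multiplication rules (i² = j² = k² = −1) reproduce this behavior for a pure-vector quaternion, i.e. taking the scalar part w as zero (an assumption; with w ≠ 0 a cross term 2wA survives in the vector part). The product's vector part vanishes and its scalar part is −|A|², whose magnitude is the A² stress term discussed in the text. A minimal sketch:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

q = (0.0, 2.0, 3.0, 6.0)  # pure-vector quaternion; |A|^2 = 4 + 9 + 36 = 49
s, x, y, z = qmul(q, q)
print(s, (x, y, z))  # scalar of magnitude A^2, vector part exactly zero
```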

Notice that the zero vector in the equation does not represent the absence of translation vectors, but it represents the presence of a system of multiple (in this case, two) vectors, one of them acting upon the other in such a manner that their external translation effect has been lost and only their stress effect remains. The quaternion scalar expression has, in fact, captured the local stress due to the forces acting one on the other, so to speak. It is focused on the local stress, and it changes the abstract vector space, adding a higher dimension to it.

In other words, the zero translation vector resultant in the equation represents the internal stress action of a nontranslating system of vectors that are present, infolded, and acting internally together on the common medium that entraps them and locks them together. The two translation vectors have formed a deterministically substructured medium-stressing system, and this is a local gravitational effect.

One sees that, if we would capture gravitation in a vector mathematics theory of EM, we must again restore the scalar term and convert the vector to a quaternion, so that one captures the quaternionically infolded stresses. These infolded stresses actually represent curvature effects in the abstract vector space itself. Changing to quaternions changes the abstract vector space, adding higher dimensions to it.

In the equation, then, we have a local gravitational effect - a local increase in the energy density of a vacuum.

We have presented only the barest illustration of how Maxwell's original quaternion theory was actually a unified field theory of electrogravitation, where gravitation deals with the stress (enfolded and trapped forces) of the medium, and electrogravitation deals with the electromagnetic stress (enfolded and trapped EM forces) of the medium.

Recapitulation: From Maxwell to 1900

In summary, Maxwell himself was well-aware of the importance and reality of the potential stress of the medium. However, after Maxwell's death, Heaviside - together with Hertz - was responsible for striving to strip away the electromagnetic potentials from Maxwell's theory, and for strongly conditioning physicists and electrical engineers that the potentials were only mathematical conveniences and had no physical reality. Heaviside also discarded the scalar component of the quaternion, and - together with Gibbs - finalized the present modern vector analysis.

The scalar component of the quaternion, however, was the term which precisely captured the electrogravitational stress of the medium. By discarding this term, Heaviside (aided by Hertz and Gibbs) actually discarded electrogravitation, and the unified EM-G field aspects of Maxwell's theory. However, the theory and the calculations were greatly simplified in so doing, and this excision of electrogravitation provided a theory that was much more easily grasped and applied by scientists and engineers - even though they were now working in a subset of Maxwell's theory in which gravity and EM remained mutually exclusive and did not interact with each other.

Shortly before 1900, the vectorists' view prevailed, and the Heaviside version of Maxwell's theory became the established and universal "EM theory" taught in all major universities - and erroneously taught as "Maxwell's theory"! Though gravitation had been removed, the beautiful unification of the electrical and magnetic fields had been retained, and so the rise in applied and theoretical electromagnetics and electromagnetic devices began, ushering in the modern age."

(Maxwell's lost unified field theory of electromagnetics and gravitation, T. Bearden)

" ... In discarding the scalar component of the quaternion, Heaviside and Gibbs unwittingly discarded the unified EM/G [electromagnetic/ gravitational] portion of Maxwell's theory that arises when the translation/directional components of two interacting quaternions reduce to zero, but the scalar resultant remains and infolds a deterministic, dynamic structure that is a function of oppositive directional/translational components. In the infolding of EM energy inside a scalar potential, a structured scalar potential results, almost precisely as later shown by Whittaker but unnoticed by the scientific community. The simple vector equations produced by Heaviside and Gibbs captured only that subset of Maxwell's theory where EM and gravitation are mutually exclusive. In that subset, electromagnetic circuits and equipment will not ever, and cannot ever, produce gravitational or inertial effects in materials and equipment.

It is to be noted that Kaluza combined electromagnetics and gravitation as a unified theory in 1921.  Kaluza added a fifth (spatial) dimension to Minkowski's 4-space, and applied Einstein's relativity theory to 5 dimensions.

To Kaluza's delight, a common 5-d potential is responsible for both electromagnetic field and gravitational field.  The "bleed-off' of this 5-potential in the 5th dimension (which is wrapped around each point in our 3-space) is what we know as the electromagnetic force field.  The bleed-off of this 5-potential in and through our 3-space is what we know as the gravitational force field."

Now we know that there is no such thing as the Minkowski spacetime continuum, and no other higher dimensions. (total demolition of STR/GTR)

Ether theory means that there are more infinitesimal levels of strings of bosons, which form each and every known particle in quantum physics: baryons (9 subquarks structure), mesons (6 subquarks structure), quarks (3 subquarks structure) and the subquarks themselves (gravitons/magnetic monopoles).

In ether theory, ELECTRICITY = MAGNETISM = TERRESTRIAL GRAVITY: subquarks = gravitons = magnetic monopoles.

« Last Edit: December 26, 2016, 03:39:50 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #343 on: December 26, 2016, 08:17:29 AM »

"In the 1880s, several scientists --- Heaviside, Gibbs, Hertz etc. --- strongly assaulted the Maxwellian theory and dramatically reduced it, creating vector algebra in the process. Then circa 1892 Lorentz arbitrarily symmetrized the already seriously constrained Heaviside-Maxwell equations, just to get simpler equations easier to solve algebraically, and thus to dramatically reduce the need for numerical methods (which were a "real bear" before the computer). But that symmetrization also arbitrarily discarded all asymmetrical Maxwellian systems - the very ones of interest to us today if we are seriously interested in usable EM energy from the vacuum.

So anyone seriously interested in potential systems that accept and use additional EM energy from the vacuum, must first violate the Lorentz symmetry condition, else all his efforts are doomed to failure a priori.

We point out that quaternion algebra has a higher group symmetry than either vector algebra or tensor algebra, and hence it reveals much more EM phenomenology and dynamics than does EM in vector or tensor form.

Today, the tremendously crippled Maxwell-Heaviside equations --- symmetrized by Lorentz --- are taught in all our universities in the electrical engineering (EE) department. Note that the EE professors still dutifully symmetrize the equations, following Lorentz, and thus they continue to arbitrarily discard all asymmetrical Maxwellian systems. Hence none of them has the foggiest notion of how to go about developing an "energy from the vacuum" system, which is asymmetrical a priori.

The resulting classical electromagnetics and electrical engineering (CEM/EE) model taught in all our university EE departments also contains very serious falsities. Most of modern physics, such as special and general relativity, quantum field theory, etc., has been developed since the 1880s and 1890s fixation on the symmetrized Maxwell-Heaviside equations.

When Lorentz symmetrically regauged the Maxwell-Heaviside equations, he arbitrarily discarded the entire class of Maxwellian systems that are far from equilibrium in their exchange with their active (vacuum) environment. Lorentz revised (symmetrically regauged) the Maxwell-Heaviside equations to make them amenable to separation of variables and closed analytical solutions.

The domain of Lorentz's symmetrically regauged equations is only a small subset of the domain of the Maxwell-Heaviside equations they replace. Indeed, the later Lorentz symmetrical regauging discards an entire class of Maxwellian systems permitted by nature and by the Maxwell-Heaviside equations before they are symmetrically regauged."

"From the beginning, Poynting only considered that component of the energy flow that actually enters the circuit. He considered only the "boundary layer" right on the conductor surfaces, so to speak.

Heaviside considered that component that enters the circuit, and also uncovered and recognized the gigantic component in the surrounding space that does not enter the circuit but misses it entirely and is wasted.

Heaviside himself recognized the gravitational implications of his extra component of energy flow, which is in closed circular loops. Beneath the floorboards of his little garret apartment, years after his death, handwritten papers were found where Heaviside used this component for a unified EM approach to gravitation.

Heaviside's theory was an extension of what Poynting had considered, and Heaviside also corrected Poynting as to the direction of flow. Heaviside was fully aware of the enormity of the "dark energy" flow missed by Poynting, but had absolutely no explanation as to where such a startlingly large EM energy flow — pouring from the terminals of every dipole, generator, or battery — could possibly be coming from.

Heaviside was fully aware that the energy flow diverged into the wire was only a minuscule fraction of the total. He was fully aware that the remaining component was so huge that the energy flow vector remaining — after the divergence of the Poynting component into the circuit — was still almost parallel to the conductors. However, he had no explanation at all of where such an enormous and baffling energy flow could possibly originate.

Had Heaviside strongly stated the enormity of the nondiverged component of the energy flow, he would have been viciously attacked and scientifically discredited as a perpetual motion advocate. His words were measured and cautious, but there is no doubt that he recognized the enormity of the nondiverged EM energy flow component.

Lorentz then entered the EM energy flow scene to face the terrible problem so quietly raised by Heaviside. Lorentz understood the presence of the Poynting component, and also of the extra Heaviside component, but could find no explanation for the startling, enormous magnitude of the EM energy pouring out of the terminals of the power source (pouring from the source dipole) if the Heaviside component was accounted.

Unable to solve the dark energy flow problem by any rational means, Lorentz found a clever way to avoid it. He reasoned that the nondiverged Heaviside component was "physically insignificant" (his term) because it did not even enter the circuit. Since it did nothing of any physical consequences, or so he reasoned, then it could just be discarded. So Lorentz simply integrated the entire energy flow vector (the vector representing the sum of both the Heaviside nondiverged component and the Poynting diverged component) around an assumed closed surface enclosing any volume of interest. A priori, this mathematical procedure discards the dark Heaviside energy flow component because of its nondivergence. It retains only the intercepted Poynting diverged component that enters the circuit.

See E. R. Laithwaite, “Oliver Heaviside – establishment shaker,” Electrical Review, 211(16), Nov. 12, 1982, p. 44-45.

Laithwaite felt that Heaviside’s postulation that a flux of gravitational energy combines with the (ExH) electromagnetic energy flux, could shake the foundations of physics. Quoting from Laithwaite: “Heaviside had originally written the energy flow as S = (E x H) + G, where G is a circuital flux.

Poynting had only written S = (E x H). Taking p to be the density of matter and e the intensity of a gravitational force, Heaviside found that the circuital flux G can be expressed as pu - ce, where u represents the velocity of p and c is a constant.”
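Laithwaite's decomposition can be put in symbols, and it also shows why the closed-surface integration described above discards G: a circuital (divergence-free) flux integrates to zero over any closed surface. A sketch, with ρ the matter density, u its velocity, e the gravitational intensity and c a constant, as in the quotation:

```latex
\mathbf{S} = \mathbf{E}\times\mathbf{H} + \mathbf{G},
\qquad \mathbf{G} = \rho\mathbf{u} - c\,\mathbf{e},
\qquad \nabla\cdot\mathbf{G} = 0
\;\Longrightarrow\;
\oint_{\partial V}\mathbf{G}\cdot d\mathbf{A}
   = \int_{V}\nabla\cdot\mathbf{G}\,dV = 0 .
```

By the divergence theorem, only the diverged (Poynting) component survives the surface integration; the circuital component drops out identically.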

One of the rather "bad examples" of ubiquitous errors in electrodynamics is the conventional illustration of a so-called planar EM wavefront moving through space."

Dr. Robert H. Romer, former Editor of the American Journal of Physics, also chastised the diagram shown above, purporting to illustrate the transverse plane wave traveling through 3-space. In endnote 24 of his noteworthy editorial, Dr. Romer takes that diagram to task as follows:

"…that dreadful diagram purporting to show the electric and magnetic fields of a plane wave, as a function of position (and/or time?) that besmirch the pages of almost every introductory book. …it is a horrible diagram. 'Misleading' would be too kind a word; 'wrong' is more accurate." "…perhaps then, for historical interest, [we should] find out how that diagram came to contaminate our literature in the first place."

Ether = subquark strings travelling in double torsion fashion (one string is made up of dextrorotatory subquarks, the other string consists of laevorotatory subquarks)

“The fact is that Maxwell’s core ideas in electromagnetism had their origins in a sea of molecular vortices exactly along the lines of what Tesla and Sir Oliver Lodge were referring to.

Maxwell identified the cause of magnetic repulsion in terms of the centrifugal pressure arising in a sea of molecular vortices. He identified the mechanism for the force on a current carrying wire, and also for motionally induced EMF, in terms of differential centrifugal pressure in this sea of molecular vortices. He explained time varying electromagnetic induction on the basis that the tiny vortices in space are acting like fly-wheels.

There was another curl equation in Maxwell’s original list of 1864, but it does not appear in modern sets of ‘Maxwell’s Equations’. This very important curl equation,

curl A = μH (2)

which relates to the fly-wheel nature of the magnetic field, is played down nowadays in favour of the much less informative equation, div B = 0, which is obtained by taking the divergence of equation (2).
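The derivation alluded to is one line, since the divergence of a curl vanishes identically:

```latex
\nabla \cdot (\nabla \times \mathbf{A}) \equiv 0
\qquad\Longrightarrow\qquad
\nabla \cdot (\mu\mathbf{H}) = \nabla \cdot \mathbf{B} = 0
```

This is why div B = 0 carries strictly less information than equation (2): it is a consequence of it, not an equivalent.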

The third curl equation, which appeared in both Maxwell’s original listing and in modern listings, is Ampère’s circuital law, and Maxwell is most famous for having extended it to include the concept of displacement current. The displacement current concept was purely Maxwell’s own idea, although Maxwell’s concept of it bears no relationship to the term which bears the same name in modern textbooks.

Therefore, contrary to what is taught in modern textbooks, Maxwell’s version of Ampère’s Circuital Law does not mean that a changing electric field induces a magnetic field. In the context of an electromagnetic wave, both of these two curl equations must refer to a situation in which the changing magnetic field of a primary circuit induces an electric field in a secondary circuit. The displacement in question, as Maxwell initially suspected, is an angular displacement, which takes place in the fine-grained electric circuits (rotating electron-positron dipoles) which fill all of space, and which press against each other with centrifugal force while striving to dilate. Every cubic picometre of space contains a two-pin electric power point (a rotating electron-positron dipole). These power points exist everywhere and they connect the universe to the source of its animation. Electromagnetic waves are a propagation of angular acceleration (or precession) through this electric sea of tiny aethereal vortices and the undulations correspond to oscillations in fine-grained centrifugal pressure. These pressure oscillations are caused by an excess outflow of aether from the positrons of the electric sea.” (section 8, pages 8-9, exposes one of the deceptions used to hide the original form and meaning of Maxwell's aether equations)

« Last Edit: June 27, 2023, 02:08:07 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #344 on: December 27, 2016, 02:14:53 AM »

In earlier experimentation, Thomas Townsend Brown had made the startling discovery that a Coolidge X-ray tube exhibited thrust when charged to high voltage. It took Brown a while to realize that the motion was not caused by the X-rays themselves, but by the electricity coursing through the tube. Brown went on to develop a device he called the "Gravitor," an electrical condenser sealed in a Bakelite case, that would exhibit a one percent weight gain or a one percent weight loss when connected to a 100-kilovolt power supply.

"The Gravitor, in all reality, is a very efficient motor. Unlike other forms of motor, it does not in any way involve the principles of electromagnetism, but instead it utilizes the principles of electro-gravitation.

A simple gravitor has no moving parts, but is apparently capable of moving itself from within itself. It is highly efficient for the reason that it uses no gears, shafts, propellers or wheels in creating its motive power. It has no internal mechanical resistance and no observable rise in temperature. Contrary to the common belief that gravitational motors must necessarily be vertical-acting, the gravitor, it is found, acts equally well in every conceivable direction."

T.T. Brown, 1929

In 1955, he went to work for the French aerospace company SNCASO—Société Nationale de Constructions Aéronautiques du Sud-Ouest. During this one-year research period, he ran his discs in a vacuum. If anything, they worked better in a vacuum.

Since the time of the first test the apparatus and the methods used have been greatly improved and simplified. Cellular "gravitators" have taken the place of the large balls of lead. Rotating frames supporting two and four gravitators have made possible acceleration measurements. Molecular gravitators made of solid blocks of massive dielectric have given still greater efficiency. Rotors and pendulums operating under oil have eliminated atmospheric considerations as to pressure, temperature and humidity.

The disturbing effects of ionization, electron emission and pure electro-statics have likewise been carefully analyzed and eliminated. Finally after many years of tedious work and with refinement of methods we succeeded in observing the gravitational variations produced by the moon and sun and much smaller variations produced by the different planets.

Let us take, for example, the case of a gravitator totally immersed in oil but suspended so as to act as a pendulum and swing along the line of its elements.

When the direct current with high voltage (75-300 kilovolts) is applied, the gravitator swings up the arc until its propulsive force balances the force of the earth's gravity resolved to that point, then it stops, but it does not remain there. The pendulum then gradually returns to the vertical or starting position even while the potential is maintained. The pendulum swings only to one side of the vertical. Less than five seconds is required for the test pendulum to reach the maximum amplitude of the swing but from thirty to eighty seconds are required for it to return to zero.

(T.T. Brown, How I Control Gravitation, 1929)

“Brown’s first experiments consisted of two lead spheres connected by a nonconductive glass rod, like a dumbbell. One sphere was charged positive, the other negative, with a total of 120 kilovolts between them. This formed a large electric dipole. When suspended, the system moved toward the positive pole, arcing upwards and staying there against the force of gravity tugging downward. This showed that electric dipoles generate self-acceleration toward the positive pole. This experiment was repeated in oil, in a grounded tank, proving that ion wind was not responsible.

Improved versions of this setup replaced the lead spheres with metal plates, and glass rod with dielectric plates or blocks. This created a high voltage parallel plate capacitor with one or more layers. Brown’s British patent #300,111 – issued in 1927 – described what he termed a “cellular gravitator” consisting of numerous metal plates interleaved with dielectric plates, the entire block wrapped in insulating material and end plates connected to output electrodes and a spark gap to limit the input voltage. This device produced significant acceleration.

Brown’s 1927 patent described a self-contained device that exhibited no ion wind effects and relied solely upon the electrogravitational action arising from the electric dipoles within the gravitator-capacitor.”

“When a high voltage (~30 kV) is applied to a capacitor whose electrodes have different physical dimensions, the capacitor experiences a net force toward the smaller electrode (Biefeld-Brown effect).

The calculations indicate that ionic wind is at least three orders of magnitude too small to explain the magnitude of the observed force on the capacitor (in open air experiments).”

"In the Paris tests, miniature saucer-type airfoils were operated in a vacuum exceeding 10⁻⁶ mm Hg. Bursts of thrust (towards the positive) were observed every time there was a vacuum spark within the large bell jar.

Condensers of various types, air dielectric and barium titanate, were assembled on a rotary support to eliminate the electrostatic effect of chamber walls, and observations were made of the rate of rotation. Intense acceleration was always observed during the vacuum spark (which, incidentally, illuminated the entire interior of the vacuum chamber). Barium titanate dielectric always exceeded air dielectric in total thrust. The result which was most significant from the standpoint of the Biefeld-Brown effect was that thrust continued even when there was no vacuum spark, causing the rotor to accelerate in the negative-to-positive direction to the point where the voltage had to be reduced, or the experiment discontinued, because of the danger that the rotor would fly apart.

In short, it appears there is strong evidence that the Biefeld-Brown effect does exist in the negative-to-positive direction in a vacuum of at least 10⁻⁶ Torr. The residual thrust is several orders of magnitude larger than the remaining ambient ionization can account for. Going further, in your letter of January 28th, the condenser 'Gravitor' as described in my British patent only showed a loss of weight when vertically oriented so that the negative-to-positive thrust was upward. In other words, the thrust tended to 'lift' the gravitor."

T.T. Brown, 1973

“The initial experiments conducted by Townsend Brown, concerning the behavior of a condenser when charged with electricity, had the characteristic of simplicity which has marked most other great scientific advancements.

The first startling revelation was that if placed in free suspension with the poles horizontal, the condenser, when charged, exhibited a forward thrust toward the positive poles. A reversal of polarity caused a reversal of the direction of thrust. The experiment was set up as follows:

The antigravity effect of vertical thrust is demonstrated by balancing a condenser on a beam balance and then charging it. After charging, if the positive pole is pointed upward, the condenser moves up.

If the charge is reversed and the positive pole pointed downward, the condenser thrusts down. The experiment is conducted as follows:"

VACUUM TEST #1 (includes all necessary technical information and the video itself)

At the pressure of 1.72 × 10⁻⁶ Torr (high vacuum conditions), the apparatus rotates when the high voltage is increased from 0 to +45 kV.

VACUUM TEST #2 (includes technical information and video)

VACUUM TEST #3 (includes technical information and video)



In 1955 and 1956 Townsend Brown made two trips to Paris where he conducted tests of his electrokinetic apparatus and electrogravitic vacuum chamber tests in collaboration with the French aeronautical company Société Nationale de Constructions Aéronautiques du Sud-Ouest (S.N.C.A.S.O.).

In addition, the Project Montgolfier team constructed a very large vacuum chamber for performing vacuum tests of smaller discs at a pressure of 5 × 10⁻⁵ mm Hg:

The report says that under high vacuum conditions the discs always moved in the direction of the positive pole, regardless of the polarity on the outboard wire. 

These vacuum chamber experiments were a decisive milestone in that they demonstrated beyond a doubt that electrogravitic propulsion was a real physical phenomenon. 


When the DISK-SHAPED CAPACITOR WAS USED, the total deviation/movement was A FULL 30 DEGREES (déviation totale du système: 30 degrés).

VIDEO: BIEFELD-BROWN EFFECT, balancing a condenser on a beam balance (includes three videos of the experiment)

« Last Edit: May 11, 2017, 05:35:05 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #345 on: December 28, 2016, 12:09:16 AM »

“Dr. Francis Nipher, Professor of physics, Washington University, St. Louis, Missouri, did some of the pioneering electrogravitics work at Washington University in St. Louis back around the turn of the last century. He applied high voltage to lead balls, lead spheres and hollow metal boxes and compared the repulsive effect induced in small test spheres hung vertically near them, similar to the original Cavendish experiments but with high voltage. Dr. Nipher went to great lengths to insert protective, grounded screens of glass between the solid lead spheres and the suspended balls to rule out electrostatic effects.”

Before connecting any form of electric current to the modified Cavendish apparatus, Prof. Nipher took special precaution to carefully screen the moving element from any electrostatic or electromagnetic effects. His apparatus, briefly, consists of two large lead spheres ten inches in diameter, resting upon heavy sheets of hard rubber. Two small lead balls, each one inch in diameter, were now suspended from two silk threads, stationed at the sides of the two large lead spheres, from which they were separated by a little distance. Moreover, the suspended balls were insulated elaborately from the large spheres by enclosing them first airtight in a long wooden box, which was also covered with tinned iron sheets as well as cardboard sheets. There was, furthermore, a metal shield between the box and the large metal spheres. The large metal lead spheres now exerted a certain gravitational force upon the suspended small lead balls … and the small lead balls were slightly moved over towards the large spheres.

In further experiments Prof.  Nipher decided to check his results. To do this he replaced the large solid lead spheres with two metal boxes, each filled with loose cotton batting. These hollow boxes (having practically no mass) rested upon insulators. They were separated from the protective screen by sheets of glass and were grounded to it by heavy copper wires. The metal boxes were then charged in every way that the solid lead spheres had been, but not the slightest change in the position of the lead balls could be detected. This would seem to prove conclusively that the "repulsion" and "gravitational nullification" effects that he had produced when the solid balls were electrically charged were genuine and based undoubtedly on a true inter-atomic electrical reaction, and not upon any form of electrostatic or electromagnetic effects between the large and small masses. If they had been, the metal boxes, with no mass, would have served as well as the solid balls.

The relationship between gravitation and the electric field was first observed experimentally by Dr. Francis Nipher. Nipher's conclusion was that shielded electrostatic fields directly influence the action of gravitation. He further concluded that gravitation and electrical fields are absolutely linked.

New Evidence of a Relation Between Gravitation & Electrical Action (1920)
Gravitational Repulsion (1916)
Gravitation & Electrical Action (1916)
Can Electricity Reverse the Effect of Gravity? (1918)

Dr. Francis Nipher conducted extensive experiments during 1918 on a modified Cavendish experiment. He reproduced the classical arrangement for the experiment, where gravitational attraction could be measured between free-swinging masses and a large fixed central mass. Dr. Nipher modified the Cavendish experiment by applying a large electrical field to the central mass, which was shielded inside a Faraday cage. When electrostatic charge was applied to the large fixed mass, the free-swinging masses exhibited a reduced attraction to the central mass when it was only slightly charged. As the electric field strength was increased, there arose a voltage threshold which resulted in no attraction at all between the fixed mass and the free-swinging masses. Increasing the potential applied to the central mass beyond that threshold resulted in the free-swinging masses being repelled (!) from the fixed central mass.

"These results seem to indicate clearly that gravitational attraction between masses of matter depends upon electrical potential due to electrical charges upon them."

"Every working day of the following college year has been devoted to testing the validity of the above statement. No results in conflict with it have been obtained. Not only has gravitational attraction been diminished by electrification of the attracting bodies when direct electrical action has been wholly cut off by a metal shield, but it has been made negative. It has been converted into a repulsion. This result has been obtained many times throughout the year. On one occasion during the latter part of the year, this repulsion was made somewhat more than twice as great as normal attraction."


Dr. Francis Nipher, one of the most distinguished physicists of the United States:

« Last Edit: December 28, 2016, 01:31:38 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #346 on: December 28, 2016, 01:28:46 AM »

The rate of acceleration of a falling object, which acquires kinetic energy, is a measure of energy flow via conduction through the ether.

The velocity of a free-falling object changes by approximately 9.8 m/s every second because of the constant pressure applied by ether strings on matter.

On a flat surface of the Earth, the g force equals π² and is not related at all to the mass/radius of the Earth.

π is the inverse value of the sacred cubit:

1 sacred cubit = 2/π

g = π² = 4/sc² m/s²

There are several values to be used for the sacred cubit depending on the color of the light spectrum: starting from 0.62832 all the way to 0.64 – the most important value is of course 0.63566, the sacred cubit.

The color spectrum of the subquark is related directly to each and every form of force known to physics: from gamma radiation and x-rays to microwaves and radio waves.

The darker colors correspond to the terrestrial gravity force (aether sink), the brighter colors to the antigravitational force (aether source).

Subquark = Magnetic Monopole = Graviton

If we select the value of the sacred cubit corresponding to a darker color, 0.63855, then we obtain g = 9.81 m/s². (formula for g in terms of ether pressure and ether density, pg 4) (section 3, Gravitational Pendulums, g related to π²)

18 × (1/sc²)⁻¹ = 7.273 (height of the apex of the Gizeh Pyramid)

286.1 sacred inches = 7.273 meters (286.1 = the displacement factor of the Gizeh Pyramid = 450 sacred cubits)
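A quick numerical sketch of the arithmetic above (all constants are the text's own; the identity g = 4/sc² = π² follows directly from sc = 2/π):

```python
import math

# Sacred cubit as defined above: sc = 2/pi
sc = 2 / math.pi                      # ~ 0.6366
g = 4 / sc**2                         # the text's claim: g = pi^2
assert abs(g - math.pi**2) < 1e-12    # 4/(2/pi)^2 is identically pi^2

# "Darker colour" cubit value quoted in the text
g_dark = 4 / 0.63855**2
print(round(g_dark, 2))               # ~ 9.81 m/s^2

# Apex height: 18 x (1/sc^2)^-1 = 18*sc^2, with the main cubit 0.63566
apex = 18 * 0.63566**2
print(round(apex, 3))                 # ~ 7.273 m

# 286.1 sacred inches at 0.0254425 m per sacred inch
print(round(286.1 * 0.0254425, 3))    # ~ 7.279 m, i.e. 7.273 to within rounding
```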

Thus, the value of g is directly related to the value of the first zero of Riemann’s Zeta function.

Kinetic theory of terrestrial gravitation: (ether pressure theory with formulas; G constant related to the density of ether; as is known from astrophysics, G can be shown to be directly connected to density in a single formula) (calculation of the density of ether)

“This implies an important conclusion: bodies of different volumes that are in the same gradient medium acquire the same acceleration.

Note that if we keep watch on the fall of bodies of different masses and volumes in the Earth’s gravitation field under conditions when the effect of the air resistance is minimized (or excluded), the bodies acquire the same acceleration. Galileo was the first to establish this fact. The most vivid experiment corroborating the fact of equal acceleration for bodies of different masses is a fall of a lead pellet and bird feather in the deaerated glass tube. Imagine we start dividing one of the falling bodies into some parts and watching on the fall of these parts in the vacuum. Quite apparently, both large and small parts will fall down with the same acceleration in the Earth’s gravitation field. If we continue this division down to atoms we can obtain the same result. Hence it follows that the gravitation field is applied to every element that has a mass and constitutes a physical body. This field will equally accelerate large and small bodies only if it is gradient and acts on every elementary particle of the bodies. But a gradient gravitation field can act on bodies if there is a medium in which the bodies are immersed. Such a medium is the ether medium. The ether medium has a gradient effect not on the outer sheath of a body (a bird feather or lead pellet), but directly on the nuclei and electrons constituting the bodies. That is why bodies of different densities acquire equal acceleration.

Equal acceleration of the bodies of different volumes and masses in the gravitation field also indicates such an interesting fact that it does not matter what external volume the body has and what its density is. Only the ether medium volume that is forced out by the total amount of elementary particles (atomic nuclei, electrons etc.) matters. If gravitation forces acted on the outer sheath of the bodies then the bodies of a lower density would accelerate in the gravitation field faster than those of a higher density.

The examples discussed above allow clarifying the action mechanism of the gravitation force of physical bodies on each other. Newton was the first to presume that there is a certain relation between the gravitation mechanism and Archimedean principle. The medium exerting pressure on a gravitating body is the ether.” (I. Newton letter to R. Boyle)

4. When two bodies moving towards one another come near together, I suppose the aether between them to grow rarer than before, and the spaces of its graduated rarity to extend further from the superficies of the bodies towards one another; and this, by reason that the aether cannot move and play up and down so freely in the strait passage between the bodies, as it could before they came so near together.

5. Now, from the fourth supposition it follows, that when two bodies approaching one another come so near together as to make the aether between them begin to rarefy, they will begin to have a reluctance from being brought nearer together, and an endeavour to recede from one another; which reluctance and endeavour will increase as they come nearer together, because thereby they cause the interjacent aether to rarefy more and more. But at length, when they come so near together that the excess of pressure of the external aether which surrounds the bodies, above that of the rarefied aether, which is between them, is so great as to overcome the reluctance which the bodies have from being brought together; then will that excess of pressure drive them with violence together, and make them adhere strongly to one another, as was said in the second supposition.

The origin of biochirality is to be found in the physics of the subquark:

The sacred cubit and the Gizeh Pyramid advanced calculus:

The sacred cubit and the zeros of the Riemann Zeta function: (four consecutive messages)

« Last Edit: July 05, 2020, 09:32:49 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #347 on: January 05, 2017, 02:42:40 AM »

hfrustum = 141.34725 meters

Zeta function first zero = 14.134725 (structure of a boson: two pyramids facing each other)

Relationship between the zeta function and music theory:

In a sense, if the zeros of the zeta function are like musical notes, then prime numbers are chords, and theorems about these entities are symphonies, says quantum chaologist Michael Berry of the University of Bristol in England. The Riemann hypothesis imposes a pleasing harmony on the zeta-zero notes, he adds.

One of the first inklings of a connection between number theory and quantum mechanics came in 1972. Hugh Montgomery of the University of Michigan had discovered a formula that describes the average spacing between consecutive zeros (values of b) of the zeta function. During a visit to the Institute for Advanced Study in Princeton, N.J., he showed his result to quantum physicist Freeman Dyson, when they happened to meet at afternoon tea.

Dyson immediately recognized it as identical to the result obtained for so-called random matrix models, which are used to describe the energy levels of large atoms or heavy nuclei. By an amazing coincidence, Dyson was one of just a handful of physicists in the world who had done such calculations and could appreciate Montgomery's work.

14.134725 × 180 = 2544.2505

0.0254425 = one sacred inch

0.0254425 × 25 = 0.6360625 = one sacred cubit

14.134725 - 4π = 1.568354 (value of the width of the first section from the queen chamber niche)

There is a very interesting connection between the dimensions of the Gizeh pyramid and the value of the next zero of Riemann's zeta function.

21.022 - 6π = 2.172476 (this would be the value of the width of the first section in the queen chamber niche, in a pyramid whose hfrustum = 210.22 meters)
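These identities are plain arithmetic and can be verified directly (zero values as quoted in the text; the small residual on the second zero comes from the text's rounding):

```python
import math

Z1 = 14.134725                        # imaginary part of the first zeta zero
print(round(Z1 * 180, 4))             # 2544.2505 -> 0.0254425, the sacred inch

sacred_inch = 0.0254425
print(round(sacred_inch * 25, 7))     # 0.6360625, the sacred cubit

print(round(Z1 - 4 * math.pi, 6))     # 1.568354, first niche-section width
print(round(21.022 - 6 * math.pi, 6)) # 2.172444 (text: 2.172476; with the
                                      # fuller second zero 21.022040
                                      # one gets 2.172484)
```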

Queen chamber niche measurements

First step – w = 1.568 m / l = 1.0414 m / h = 1.743 m
Second step – w = 1.34 m / l = 1.0414 m / h = 0.87266 m
Third step – w = 1.062 m / l = 1.0414 m / h = 0.69733 m
Fourth step – w = 0.773 m / l = 1.0414 m / h = 0.69733 m
Fifth step – w = 0.5156 m / l = 1.0414 m / h = 0.69733 m

Total width = height of masonry base of the Gizeh pyramid

141.34725 - 5.23 = 136.1

5.23/1.562 = 3.3482

3.3482 x 2.1724 = 7.2737 (hGizeh apex = 286.1 sacred inches)

210.22 - 7.2737 = 202.94

202.94/2.5424 = 79.822

Then, the value of the apex of the 210.22 meters pyramid will be: 16.1886

16.1886 meters = 636.3 sacred inches

Height of apex of pyramid: 7.2738 units (286.1 si)

Two apexes in merkabah formation: 9.245 units total height

The distance from the very center of the boson to the apex of the sothic triangle which embeds the merkabah triangles will measure exactly 14.134725.

The projection of the helical sound wave vortex, in the form of a very complex spiral, on the center line will represent the values of the non-trivial zeros of the zeta function.

This is the significance of the relationship between quantum mechanics and Riemann's zeta function.

I believe that there must be a deep connection between the seven musical notes (as expressed in the grand gallery measurements), the five elements theory (as expressed in the subterranean chamber/queen chamber/king chamber/djed/apex sections) and the distribution/values of the zeros of the zeta function.

Then, the FA-MI interval would correspond to the Lehmer phenomenon (a pair of zeros which are extremely close).

The increasing density of the roots would be related to cymatics, and to the displacement factor, 286.1 value, of the Gizeh pyramid.

This, then, would constitute the basis to find an exact pattern for the values of the zeros of the zeta function, and possibly a way to prove Riemann's hypothesis.

The Gizeh pyramid is the architectural equivalent of Riemann's Zeta function.

The significance of Lehmer's phenomenon:

7954022502373.43289015387 and 7954022502373.43289494012

18580341990011.1593414105 and 18580341990011.1593364110

20825125156965.3882387859 and 20825125156965.3882484837
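The quoted ordinates carry more digits than a 64-bit float can hold, so checking how close each pair really is needs exact decimal arithmetic; a Python sketch (note the second pair is listed larger-first above, hence the abs):

```python
from decimal import Decimal

# The three Lehmer-type pairs quoted above; ordinates this large need
# Decimal, since a double only carries ~16 significant digits.
pairs = [
    (Decimal("7954022502373.43289015387"), Decimal("7954022502373.43289494012")),
    (Decimal("18580341990011.1593414105"), Decimal("18580341990011.1593364110")),
    (Decimal("20825125156965.3882387859"), Decimal("20825125156965.3882484837")),
]

for a, b in pairs:
    print(abs(b - a))  # gaps of order 5e-6 to 1e-5
```

The gaps come out around 4.8e-6, 5.0e-6 and 9.7e-6, which is what makes detecting the sign change between each pair so delicate.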

The derivation of the Riemann-Siegel formula:

Zeta function zeroes gaps:

Riemann's Hypothesis:

List of zeroes of the zeta function:

« Last Edit: September 06, 2017, 02:42:46 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #348 on: January 05, 2017, 12:04:13 PM »
The increasing density of the roots would be related to cymatics, and to the displacement factor, 286.1 value, of the Gizeh pyramid.

The average gap (spacing) between consecutive zeros of Riemann's Zeta function on the critical line in the vicinity of the value z is 2π/ln(z/2π):


But this is actually a sacred cubit formula:

4/{sc x ln[(z x sc)/4]} =  (where sc = one sacred cubit = 2/π)

4/{sc x ln z  +  [sc x ln(sc/4)]} =

4/{ln z^sc  +  [sc x ln(2.861/18)]} =

4/{ln z^sc  -  5sc/(2 x 1.361)}

Let us remember how I put to good use the x^sc term here:

For z = 1 x 10^6, we get

average spacing = 1/1.9063 = 1/3sc
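The rewrite above is an exact algebraic identity: with sc = 2/π, one has 2π = 4/sc and z/2π = (z x sc)/4, so 2π/ln(z/2π) = 4/{sc x ln[(z x sc)/4]}. A quick numerical check (Python sketch, function names mine):

```python
import math

sc = 2 / math.pi  # "one sacred cubit" as used above

def mean_gap(z: float) -> float:
    """Standard asymptotic mean spacing between zeta zeros near height z."""
    return 2 * math.pi / math.log(z / (2 * math.pi))

def mean_gap_sc(z: float) -> float:
    """The same quantity rewritten with sc = 2/pi, as in the post."""
    return 4 / (sc * math.log(z * sc / 4))

z = 1e6
print(mean_gap(z))                   # ~0.5246, roughly 1/(3*sc) = 0.5236
print(mean_gap(z) - mean_gap_sc(z))  # zero up to floating-point rounding
```

The two expressions agree to machine precision for every z > 2π, since they differ only by the substitution 2π = 4/sc.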

In the following paper, we can see further sacred cubit equations (of course, the author does not realize he is using sacred cubits):

1.27 = 2sc

1.557/1.361 = 1.144 = 4 x 0.286
« Last Edit: January 05, 2017, 12:09:11 PM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #349 on: January 07, 2017, 03:06:50 AM »

"In purely mathematical terms, the computations indicate that the spacings between consecutive zeroes of the zeta function behave, statistically, like the spacings between consecutive eigenvalues of large, random matrices belonging to a class known as the
Gaussian Unitary Ensemble. It was in precisely this context that the zeta function first caught the eye of physicists, in the early 1970’s.

One of the most exciting possibilities involves an astonishing, unexpected connection
between the distribution of prime numbers and the energy levels of excited atoms. The
vehicle is a branch of mathematics known as random matrix theory.

If the random matrices belong to a class of matrices known as the Gaussian Unitary
Ensemble (GUE), physicists obtain good estimates of the average spacing between
consecutive energy levels of heavy atomic nuclei and other complex quantum systems.
It turns out that the spacings between consecutive zeros of the zeta function also
appear to behave statistically like the spacings between consecutive eigenvalues of
these large, random matrices. Indeed, this observation also suggests that the infinitely
many zeros specified in the Riemann hypothesis are irregularly distributed in a
particular way along the line 1/2 + bi."

The correlation between the arrangement of the Riemann zeroes and the energy levels of quantum chaotic systems means that the zeta function can describe the very intricate quantum physics on an infinitesimal level.

How then could this extraordinary mathematical relationship arise out of a totally random process described by the big bang theory/stellar evolution hypotheses?

The Lehmer phenomenon (two zeros of the zeta function are so close together that it is unusually difficult to find a sign change between them) also points to the extraordinary intricacies which the zeta function possesses.

(zeros of the Riemann-Siegel function Z(t) alternate with Gram points)

The highly nonlinear (apparently random) distribution of the zeros is due to the structure of the Riemann-Siegel formula:

The relationship between the (1/n^σ) x cos(2πx log n) function and music theory:

A very good and accessible introduction to the theory of Riemann's zeta function:

On Riemann's hypothesis:

« Last Edit: May 12, 2018, 06:36:08 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #350 on: January 07, 2017, 07:46:45 AM »

Let us imagine two Gizeh pyramids facing each other: 534 total units (the very reason the Tibetan monks used a single 534 hz drum in the granite block levitation described earlier).

In the center we have the two apexes which rotate in the shape of a merkabah (one is the shadow of the other).

174.53 units = hfrustum (141.34725) + hsubterranean chamber/pit

534 - 2x174.53 = 184.94

184.94 = 7.2738 x 25.425

Distance from the center of the parabindu (merkabah consisting of the two rotating apexes) to where the sound wave is emitted/manifests itself:


14.134725 x 2 = 28.26945

184.94 - 28.26945 = 156.67 units left

The length/path allowed for the sound wave to propagate:

63.566 (or 63.6363) units

156.67 - 2x63.566 = 29.4

29.4/7.2738 = 10 x 0.40417

0.40417 = 1sc^2

29.4 - 28.26945 = 1.13055 = 8 x 0.14134725

The extra 29.4 units means that there will be a second sound wave emitted from the pyramid frustums which starts at 14.134725 units from the tip of the frustum.

This means that there will be a second Riemann's zeta function wave which will propagate in a direction opposite to that of the first zeta function wave (which starts from the apex itself).

These waves will travel only within the allotted (sacred) distance of 63.566... units.

14.134725 + 63.6363 = 77.771025

Zero #19 = 77.145

Zero #20 = 79.3374 (1.566375 units in the opposite direction, 79.3374 - 77.771025 = 1.566375; the distance is added to the previous total of 77.771025 units)

In the currently accepted theory, the average gap/spacing formula between consecutive zeros is just a side note; in my opinion, it plays a far greater role in determining the values of the zeros of the zeta function.

2π/ln(z/2π) = 4/{sc x ln z  +  [sc x ln(sc/4)]}

As a simple example,

the average spacing in the vicinity of 14.134725 will be 2π/ln(14.134725/2π) ~= 7.75


14.134725 + 7.75 = 21.884725

(21.884725 - 21.022)/2.542 = 0.33938

0.534 x 1sc = 0.33938
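The 7.75 used above is just the mean-gap formula evaluated at the height of the first zero; a quick check (Python sketch, variable names mine):

```python
import math

first_zero = 14.134725    # first non-trivial zero
second_zero = 21.022040   # second non-trivial zero (published value)

# Mean gap between zeros, evaluated at the height of the first zero
gap = 2 * math.pi / math.log(first_zero / (2 * math.pi))
print(gap)                          # ~7.75
print(first_zero + gap)             # ~21.885, the 21.884725 above
print((first_zero + gap - second_zero) / 2.542)  # ~0.339, the quoted step
```

The mean-gap prediction overshoots the actual second zero by about 0.86 units; the chain above divides exactly that overshoot by 2.542.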

Once the seven note scale (including the FA-MI interval) and the five elements recursion formula are incorporated into this scheme, we should have a better understanding of the relationship between the mathematics of the zeta function and the sacred cubit dimensions of the Gizeh pyramid.

« Last Edit: September 06, 2017, 02:43:40 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #351 on: January 08, 2017, 11:42:55 PM »

The formula for the average gap/spacing between consecutive zeros can be inferred immediately, as it basically is a sacred cubit formula relating the mean spacing to the ln t^sc term.

However, once we know this formula, we can easily obtain the first term of the number of zeros equation in certain segment:

N(t) = (t/2π)ln(t/2π)

The derivation is detailed here: pg 214

The next term in the formula, -t/2π, can also be deduced using certain symmetries offered by studying the zeta function in its proper context: the sacred cubit dimensions of the Gizeh pyramid.
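For reference, these two main terms are the leading part of the Riemann-von Mangoldt count N(T) ~ (T/2π)ln(T/2π) - T/2π + 7/8. A sketch (mine) checking it against the zero ordinates cited in this thread (zeros #19 and #20 at 77.145 and 79.3374):

```python
import math

def n_zeros(T: float) -> float:
    """Riemann-von Mangoldt main terms: approximate count of zeta zeros
    with 0 < t <= T on the critical line (error is O(log T))."""
    x = T / (2 * math.pi)
    return x * math.log(x) - x + 7 / 8

# Between zero #19 (77.145) and zero #20 (79.3374), the true count is 19:
print(n_zeros(78.0))   # ~19.7
# Between zero #3 (25.0108) and zero #4 (30.4249), the true count is 3:
print(n_zeros(25.5))   # ~2.5
```

Even at these low heights the two main terms plus 7/8 land within one unit of the true count, which is why they dominate every derivation of the zero-counting formula.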

In the first segment, 63.63 units in length, we will have 48 zeros (we do not count the two 14.134725 zeros as they constitute the starting point of the segment itself) for the two zeta functions: left to right, and right to left.

63.63/36 = 14.134725/8 = 1.7675

2.542 - 1.7675 = 0.775

2π/ln(14.134725/2π) = 7.75

148.61/1.7675 = 84.08 = 53.44/1sc

How to derive the full number of zeros in a certain segment formula:

The reason for the accuracy of the Riemann-Siegel formula is that it includes in the cosine term all the features described above.

The phase of the cosine term is basically the formula for the number of zeros in a certain segment.

The frequency of the cosine term is directly related to the average gap/spacing formula.

The sum also includes a varying amplitude (n^-1/2).
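This structure is easy to sketch numerically. Below is a minimal Riemann-Siegel Z(t) (mine, not from the post): the main sum with amplitude n^(-1/2) and phase θ(t) - t·ln n, plus only the crudest remainder term, already pins down the first zero near 14.1347. The remainder term used here is only valid away from p ≈ 1/4, 3/4, which holds on the bracket searched.

```python
import math

def theta(t: float) -> float:
    # Riemann-Siegel theta function, first terms of the asymptotic series
    return (t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t))

def Z(t: float) -> float:
    """Riemann-Siegel Z(t): main sum over n <= sqrt(t/2pi) with amplitude
    n^(-1/2) and phase theta(t) - t*log(n), plus the leading remainder."""
    a = math.sqrt(t / (2 * math.pi))
    N = int(a)
    main = 2 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                   for n in range(1, N + 1))
    p = a - N
    # leading correction term, Psi(p) = cos(2pi(p^2 - p - 1/16))/cos(2pi*p)
    psi = math.cos(2 * math.pi * (p * p - p - 1 / 16)) / math.cos(2 * math.pi * p)
    return main + (-1) ** (N - 1) * psi / math.sqrt(a)

def first_zero(lo: float = 14.0, hi: float = 14.3) -> float:
    # Z changes sign across the first zero; plain bisection suffices here
    for _ in range(60):
        mid = (lo + hi) / 2
        if (Z(lo) < 0) == (Z(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(first_zero())  # close to 14.134725
```

With only one term in the main sum and one remainder term, the root comes out within a few thousandths of 14.134725; the full Riemann-Siegel expansion adds further corrections in powers of (t/2π)^(-1/2).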

The mass of a boson is given by the standing wave created by the two zeta functions which propagate along the sacred 63.63 units distance.

« Last Edit: September 15, 2017, 02:05:08 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #352 on: January 11, 2017, 12:18:47 AM »

(Figure: the Riemann-Siegel function Z(t) for values of t near 0; the dashed curve shows the value of the zeta function.)

The asymptotic correction terms expressed in terms of elementary transcendental functions are replaced by terms evaluated through higher transcendental functions (the error function):

How B. Riemann applied the saddle-point method to obtain the asymptotic form of the correction terms:

The Riemann Zeros and Eigenvalue Asymptotics

An analysis of the Riemann-Siegel coefficients to very high order, with surprising conclusions:

“Improvement from adding more correction terms in Riemann-Siegel remainder is not to continue indefinitely.”

The complexity of the Riemann-Siegel coefficients (C20 (z) term):

An alternative to the Riemann-Siegel formula: improving the convergence of the Euler-Maclaurin expansion thereby greatly reducing the length of the main sum:

All these formulas, however, do not explain the distribution of the zeros of the zeta function and its connection to the quantum physical model, one of the most important goals of modern science.

How could the Tibetan monks have known the intricate interrelation between the values of the five elements of the Gizeh pyramid, 26.7, 53.4, 80, 136.1 and 534, and its relevance to the smallest particle in quantum physics (even smaller than the boson)?

The fact that the height of the Gizeh pyramid, excluding the masonry base, measures 136.1 meters was discovered in 1985.

It is obvious that the information about the values of the five elements was known to the architects of the Gizeh pyramid and that this information was obtained by the Tibetan priests/monks and kept under strict secrecy.

Then, another question arises: how could these architects have viewed the workings of the quantum physical model of the boson, and have realized that the standing waves inside the boson can be represented by zeta functions, and further, that the value of the first zero of the zeta function, 14.134725, multiplied by ten, will equal the height of the smallest particle to be found inside a boson (141.34725)?

The highest known possible ability to view objects microscopically using psi faculties (a skill/aptitude available to all humans in dreams) is described here: (100% proof that these observations are real, as they describe the correct model of the atom before the discovery of isotopes and the correct placement of the elements within the periodic table, such as astatine, francium, and protactinium decades before they were detected)

However, this ability cannot be employed further to investigate the interior of a boson.

And yet, the architects of the Gizeh pyramid knew the exact dimensions of the smallest particle to be found inside a boson and its relationship to the zeta function.
« Last Edit: May 12, 2018, 06:37:56 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #353 on: January 12, 2017, 12:37:32 AM »

The Riemann Zeta function takes the prize for the most complicated and enigmatic function.

The lack of a proof of the Riemann hypothesis doesn’t just mean we don’t know that all the zeros are on the line x = 1/2. It means that, despite all the zeros we know of lying neatly and precisely smack bang on the line x = 1/2, no one knows why any of them do; for if we had a definitive reason why the first zero 1/2 + 14.13472514i has real part precisely 1/2, we would have a definitive reason to know why they all do.

A classic work on Riemann’s hypothesis:

On some reasons for doubting Riemann’s hypothesis:

List of zeroes of the zeta function:

In my opinion, the apparently random distribution of the zeros of the zeta function is totally related to the concept of a sacred cubit fractal (fractal: a mathematical set that exhibits a repeating pattern displayed at every scale).

Five elements of the Gizeh pyramid:


Sacred cubit distance allowed for the two zeta function waves: 63.636363... (we could also use 100sc = 200/π = 63.661977...)

Applying the five elements proportions to the sacred cubit distance:


(if we use 0.636619772 as a value for the sacred cubit, then 16.1773 becomes 16.18034, the exact value of phi x 10, the golden ratio)

21.022 - 14.134725 = 6.887275

6.887275 - 6.363 = 1/3sc


(14.134725 - 4π = 1.568354; the value of the width of the first section from the queen chamber niche)

16.1773 - 9.5445 = 6.6338


1.68632 - 0.99492 = 0.6914


9.5445 - 6.363 = 3.1815 (3.1815 = 5 sacred cubits)


0.80886 - 0.477225 = 0.331635


14.134725 + 6.6338 = 20.7685

14.134725 + 6.363 = 20.497725

14.134725 + 9.5445 = 23.679225

23.679225 + 1.68632 = 25.365545

23.679225 - 20.497725 = 3.1815

3.1815 = 5/[3 x (6.887275 - 6.363)]

23.679225 + 0.99492 + 0.33164 = 25.0057 (third zero of the zeta function = 25.0108)
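The additions in this chain can be verified mechanically; a sketch (variable names mine) comparing the end of the chain with the published third zero:

```python
first = 14.134725   # first zero of the zeta function
third = 25.010858   # third zero (published value)

# The five-elements subdivisions applied above
print(first + 6.6338)     # 20.768525
print(first + 6.363)      # 20.497725
print(first + 9.5445)     # 23.679225

approx_third = first + 9.5445 + 0.99492 + 0.33164
print(approx_third)       # ~25.0058, against the third zero 25.010858
```

The chain lands within about 0.005 of the third zero; the asserted pattern would need the same construction to keep working for the later zeros as well.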

We already know the main terms of the average spacing/gap between the zeta function zeros and of the number of zeros in a certain segment.

The sacred cubit fractal (dividing the critical line into 63.6363... segments, and further using the five elements proportions) is the hidden template of the zeta function.

14.134725 x 180 = 2544.25

0.0254425 = one sacred inch

0.0254425 x 25 = 0.6360625 = one sacred cubit

18 x 25 = 450

286.1 = 450 sacred cubits (where 286.1 is the famous displacement factor of the Gizeh pyramid)

« Last Edit: August 01, 2017, 02:42:38 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #354 on: January 12, 2017, 11:21:40 AM »

Sacred cubit structure of the Lehmer pairs

The Lehmer phenomenon (two zeros of the zeta function are so close together that it is unusually difficult to find a sign change between them) is closely related to the de Bruijn-Newman constant.

The results proven by N. C. de Bruijn and C. Newman show that there is a real constant
Λ, which satisfies −∞ < Λ ≤ 1/2.

The Riemann hypothesis is equivalent to the conjecture that Λ ≤ 0; no upper bound better than Λ ≤ 1/2 has been found so far, but several lower bounds have been calculated:

An improved bound for the de Bruijn-Newman constant: Λ > −1.14541×10^-11

An improved bound for the de Bruijn-Newman constant: Λ > −2.7 × 10^-9

A treatise which specializes in the calculation of Lehmer pairs (see pages 64-87 for a list):

On Lehmer pairs:

Lehmer pair #1


110 x 63.6363 = 6999.993

110 x 63.661977 = 7002.8175

The segment containing the Lehmer pair will be bounded by 6999.993 and 7063.5167, with 71 zeros for each zeta function.

136.1/110 = (1/sc)^2/2

Lehmer pair #2


269 x 63.6363 = 17118.1647

269/14.134725 ~= 19

Lehmer pair #3


991 x 63.661977 = 63089.019
992 x 63.6363 = 63120.96

991/7.28 = 136.12

Lehmer pair #4


1047 x 63.6363 = 66627.2061

3141/3 = 1047

3141/2 = 1/(0.63674 x 10^-3)

Lehmer pair #5 (only the first value of the pair will be included and the corresponding multiplier obtained by dividing by 63.6363... and/or 63.661977...; the complete list in the reference listed above, pg. 64-87)


1127 and 1126

1126/286.1 = 1/0.254085

Lehmer pair #6


2195 and 2194

(see next pair for the calculation)

Lehmer pair #7


2719 and 2718

2719/2195 = (1/sc)^2/2

Lehmer pair #8


2799 and 2798

2799 x sc/2 x 3/5 = 534

Lehmer pair #9


2864 and 2862

2862/7.2738 = 1/(2.5415 x 10^-3)

Lehmer pair #10


6881 and 6878

6878 ~= 24 x 286.5 ~= 16 x 136.5 x π

Lehmer pair #11


6908 and 6905

6908 ~= 12^3 x 4 ~= 700 x π^2

6908 - 6881 = 27

Lehmer pair #12


10885 and 10888

10888 = 80 x 136.1

Lehmer pair #13


18372 and 18365

18365/2.67 = 6878.277 (see Lehmer pair #10)
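The multiplier relations quoted for these pairs can be checked in a few lines (arithmetic verification only; the ~= entries hold to the precision stated above):

```python
import math

sc = 2 / math.pi  # one sacred cubit, as used throughout the thread

# Arithmetic behind the multiplier relations quoted for the pairs above
assert abs(110 * 63.6363 - 6999.993) < 1e-6        # Lehmer pair #1 bound
assert abs(110 * 63.661977 - 7002.8175) < 1e-3
assert abs(136.1 / 110 - (1 / sc) ** 2 / 2) < 0.01
assert 12 ** 3 * 4 == 6912                          # the 12^3 x 4 line
assert abs(700 * math.pi ** 2 - 6908.7) < 0.1       # ~= 6908
assert abs(2862 / 7.2738 * 2.5415e-3 - 1) < 1e-3    # pair #9 relation
print("multiplier arithmetic checks out")
```

These checks confirm only the stated products and ratios, not the claim that the pair locations follow a law; that would require the same relations to predict pairs not yet in the list.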

There is a very interesting connection linking sacred cubit values and Lehmer pairs: these pairs are not located randomly on the critical line; they are related to certain sacred cubit figures. Once this sacred cubit law which governs the location of the Lehmer pairs is known in full, we can apply the sacred cubit fractal equations to find their values.

In my opinion, the Lehmer pair of zeros has to be in mathematical relationship with the zeros of the other zeta function located on the same 63.6363... segment, in the vicinity of the Lehmer pair.

On the extreme values of the zeta function:

« Last Edit: April 06, 2019, 04:32:46 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #355 on: January 15, 2017, 07:22:07 AM »

"What did these revelations over tea at Princeton mean for the Riemann Hypothesis? If the points at sea level in Riemann's landscape can be explained by the mathematics of energy levels in physics, then there was the exciting prospect of actually proving why the points at sea level lie in a straight line. A zero off the line would be like having an imaginary energy level, something not permitted by the equations of quantum physics. Here was the best hope yet for providing some sort of explanation for Riemann's Hypothesis.

In 1989 Odlyzko plotted the distances between the zeros and compared it to Montgomery's prediction. This time the fit was staggering. Here was convincing evidence of a new aspect of the zeros. From as far away as 10^20 they were sending out the very clear message that they were being produced by some complicated mathematical drum.

Berry's interest in the primes coincided with his growing understanding of the differences between the statistics of energy levels in electrons playing quantum billiards and the energy levels in a random quantum drum. 'I thought it might be interesting to look again at the story of the Riemann zeros and Dyson's ideas in the light of the new connections with quantum chaos.' Would the special statistics that Berry had discovered in the energy levels of quantum billiards be reflected in the statistics of the zeros of the Riemann zeta landscape? 'I thought it would be very nice to see if the zeros actually behaved in this way, and I did some rough calculations.' But he didn't have enough data. 'Then I heard of Odlyzko, who'd done these epic calculations. I wrote to him and he was wonderfully helpful. He explained to me that he'd been a little worried because his calculations beyond a certain point had started to show some deviations. He thought he must have made a mistake in his computations.'

But Odlyzko did not have the insights of a physicist. When Berry compared the zeros to the energy levels of chaotic quantum billiards, he found a perfect match. The discrepancies that Odlyzko had observed turned out to be the first sign of the difference between the statistics of frequencies in a random quantum drum and the energy levels of chaotic quantum billiards. He had not been aware of this new chaotic quantum system, but Berry recognised it straight away: This was a great moment because it was obviously right. That was to me absolute convincing circumstantial evidence that if you think the Riemann Hypothesis is true, then the Riemann zeros would have underlying them not just a quantum system, but a quantum system with a classical counterpart, moderately simple but chaotic.

It was all very fascinating seeing the same pictures cropping up in both areas, but who could point to some genuine contribution to prime number theory that these connections had made possible? Peter Sarnak offered the quantum physicists a challenge: use the analogy between quantum chaos and prime numbers to tell us something we don't already know about Riemann's landscape - something specific that couldn't be hidden behind statistics.

There are certain attributes of the Riemann zeta function, called its moments, which it was known should give rise to a sequence of numbers. The trouble was that mathematicians had very little clue as to how to calculate the sequence itself.

Before the Seattle meeting, Conrey had done a huge amount of work on the problem of the next number in collaboration with a colleague, Amit Ghosh, which suggested that the third number in the sequence (after 1 and 2) was a big jump away, at 42. For Conrey, that this should be the number next in the sequence 'was kind of surprising'.

In the meantime, Conrey had joined forces with another mathematician, Steve Gonek. With a huge effort, squeezing all they could from their knowledge of number theory, they came up with a guess for the fourth number in the sequence - 24,024. 'So we had this sequence: 1, 2, 42, 24,024, . . . We tried like the Dickens to guess what the sequence was. We knew our method couldn't go any further because it was giving a negative answer for the next number in the sequence.' It was known that all the numbers in the sequence were bigger than zero. Conrey arrived at Vienna prepared to talk about why they thought the next number in the sequence was 24,024.

'Keating arrived a little late. On the afternoon he was going to give his lecture I saw him, and I'd seen his title and I had begun to wonder whether he had got it. As soon as he showed up I went and immediately asked him, "Did you figure it out?" He said yes, he'd got the 42.' In fact, with his graduate student, Nina Snaith, Keating had created a formula that would generate every number in the sequence. 'Then I told him about the 24,024.' This was the real test. Would Keating and Snaith's formula match Conrey and Gonek's guess of 24,024? After all, Keating had known that he was meant to be getting 42, so he might have cooked his formula to get this number. This new number, 24,024, was completely new to Keating and not one he could fake."

(from Music of the Primes)

The University of Bristol has been at the forefront of showing that there are striking similarities between the Riemann zeros and the quantum energy levels of classically chaotic systems.

From a conference in 1996 in Seattle, aimed at fostering collaboration between physicists and number theorists, came early evidence of correlation between the arrangement of the Riemann zeroes and the energy levels of quantum chaotic systems. If this were true it would prove the Riemann hypothesis.

Now, there are certain attributes of the Riemann zeta function called its moments which should give rise to a sequence of numbers. However, before the Seattle conference, only two of these moments were known: 1, as calculated by Hardy and Littlewood in 1918; and 2, calculated by Ingham in 1926.

The next number in the series was suggested as 42 by Conrey (now also at Bristol) and Ghosh in 1992.

The challenge for the quantum physicists then, was to use their quantum methods to check the number 42 and to calculate further moments in the series, while the number theorists tried to do the same using their methods.

Prof Jon Keating and Dr Nina Snaith at Bristol describe the energy levels in quantum systems using random matrix theory. Using RMT methods they produced a formula for calculating all of the moments of the Riemann zeta function. This formula confirmed the number 42.

Two years after Seattle, Keating and Snaith attended a follow-up conference at the Schrodinger Institute in Vienna to present their formula. Meanwhile, number theorists Conrey and Gonek had suggested the next moment in the series.

When Keating and Snaith's formula was used to calculate this moment, it coincided with the number theorists' suggestion: 24,024. The formula really works.

Usually pure mathematics supports physics, supplying the mathematical tools with which physical systems are analysed, but this is a case of the reverse: quantum physics is leading to new insights into number theory.

These are the numbers found so far in the sequence:

1, 2, 42, 24024; the next one should be 701149020 (taken from the Young tableaux)

But these numbers are totally related to the sacred cubit.

24024/286 = 84 = 42 x 2

84 x 1sc = 53.4

(286.1 is the displacement factor of the Gizeh pyramid)

701149020/286= 2451570

2451570/30 = 81719

2862 = 81796

81796 - 81719 = 77

7 = 2.5442 x 1/(1 - 1sc)

2863 = 23393656

23393656 x 30 = 701809680

701809680 - 701149020 = 660660

660660/286 = 2310 = 77 x 30
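The arithmetic linking the moment sequence 1, 2, 42, 24024, 701149020 to 286 can be verified directly (a check of the numbers quoted, not of any deeper claim):

```python
moments = [1, 2, 42, 24024, 701149020]

# Each quoted relation, checked exactly in integer arithmetic
assert 24024 // 286 == 84 == 42 * 2
assert 701149020 // 286 == 2451570
assert 2451570 // 30 == 81719
assert 286 ** 2 - 81719 == 77          # 81796 - 81719
assert 286 ** 3 == 23393656
assert 286 ** 3 * 30 - 701149020 == 660660
assert 660660 // 286 == 2310 == 77 * 30
print("all quoted arithmetic checks out")
```

All divisions here happen to be exact, which is why integer floor division reproduces the quoted values with no rounding.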
« Last Edit: January 15, 2017, 07:48:50 AM by sandokhan »



Re: Advanced Flat Earth Theory
« Reply #356 on: January 16, 2017, 12:23:09 AM »

“There’s a complexity to the zeta function that we have not been able to grasp"

"At the beginning of the 1970s, one mathematician stood at the head of this small band of sceptics. Don Zagier is one of the most energetic mathematicians on today's mathematical circuit, cutting a dashing figure as he sweeps through the corridors of the Max Planck Institute for Mathematics in Bonn, Germany's answer to the Institute for Advanced Study in Princeton.

But Zagier recognised that 300 million zeros represented an important watershed. There were theoretical reasons why the first several thousand zeros had to be on Riemann's ley line. However, as one advanced farther north, the reasons why early zeros had to be on Riemann's line began to be outweighed by even stronger reasons why zeros should start falling off the line. By 300 million zeros, Zagier realised, it would be a miracle if zeros weren't pushed off the line.

Zagier based his analysis on a graph he knew kept track of the behaviour of the gradient in the hills and valleys along Riemann's ley line. Zagier's graph represented a new perspective from which to view the cross-section of Riemann's landscape along the critical line. What was interesting was that it facilitated a new interpretation of the Riemann Hypothesis. If this new graph ever crosses Riemann's critical line, there has to be a zero off Riemann's line in this region, making the Riemann Hypothesis false. To begin with, the graph is nowhere near the critical line, and in fact climbs away. But as one marches farther and farther north the graph starts coming down, edging towards the critical line. Every now and again Zagier's graph attempts to crash through the critical line, but as the figure opposite illustrates something seems to be preventing it from crossing.

So, the farther north one advances, the more likely it seems that this graph might cross the critical line. Zagier knew that the first real weakness was around the 300-millionth zero. This region of the critical line would be the real test. By the time you had gone this far north, if the graph still did not cross the critical line, then there really had to be a reason why it didn't. And that reason, Zagier reasoned, would be that the Riemann Hypothesis was true. And that is why Zagier set the threshold at 300 million zeros.

For about five years nothing happened. Computers grew slowly more powerful, but to compute even twice as many zeros let alone a hundred times as many would have required such a huge amount of work that nobody bothered. After all, in this business there was no point expending huge amounts of energy on merely doubling the amount of evidence. But then, about five years later, computers suddenly got a lot faster, and two teams took up the challenge of exploiting this new power to calculate more zeros. One team was in Amsterdam, run by Herman te Riele, and the second was in Australia, led by Richard Brent.

So the team went on to 300 million. Naturally they didn't find a zero off the line.

More importantly though, the evidence, in Zagier's view, was now overwhelmingly in support of the Hypothesis. The computer as a calculating tool was finally powerful enough to navigate far enough north in Riemann's zeta landscape for there to be every chance of throwing up a counter-example. Despite numerous attempts by Zagier's auxiliary graph to storm across Riemann's critical line, it was clear that something was acting like a huge repulsive force stopping the graph from crossing the line. And the reason? The Riemann Hypothesis.

If the Riemann Hypothesis is false, that would imply that the prime number coin is far from fair. The farther east one finds zeros off Riemann's ley line, the more biased is the prime number coin. A fair coin produces truly random behaviour, whereas a biased coin produces a pattern. The Riemann Hypothesis therefore captures why the primes look so random. Riemann's brilliant insight had turned this randomness on its head by finding the connection between the zeros of his landscape and the primes. To prove that the primes are truly random, one has to prove that on the other side of Riemann's looking-glass the zeros are ordered along his critical line."

(from Music of the Primes)

In Riemann's explicit formula for the prime counting function, the left hand side is a step function, and on the right hand side, somehow, the zeros of the zeta function (ρ) conspire at exactly the prime numbers to make that sum jump.

"It is worthwhile to note that |x^ρ| = |x^R(ρ)|.

With this above representation of the prime counting function, we now have an explicit way of showing that the count of primes less than a certain number, our x, is dependent upon the zeros of the Riemann zeta function. Now we can see that if Riemann’s hypothesis is true, and all of the zeros ρ have real part equal to one half, then the sum above doesn’t have to worry about other values for |x^ρ|. So if Riemann’s hypothesis is true, understanding that sum becomes far easier than if it were not true and the values of R(ρ) jumped around all over the place. So, if Riemann’s hypothesis is actually true, then we have an explicit definition for the prime counting function, and should we wish to know the number of primes less than a given number, we can find that number with this function and no longer have to worry about an amount of error." (the "encoding" of the distribution of prime numbers by the nontrivial zeros of the Riemann zeta function) (Riemann's zeros and the rhythm of the primes)
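That "conspiracy" can be watched numerically through von Mangoldt's explicit formula ψ(x) = x - Σ_ρ x^ρ/ρ - ln(2π) - ½ln(1 - x^-2). A sketch (mine) truncating the sum to the first ten zeros, with their imaginary parts hardcoded from published tables, compared against the true prime-power count:

```python
import math

# Imaginary parts of the first ten non-trivial zeros (published values)
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def smallest_prime_factor(n: int) -> int:
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def psi_true(x: float) -> float:
    """Chebyshev psi(x): sum of log p over all prime powers p^k <= x."""
    total = 0.0
    for n in range(2, int(x) + 1):
        p = smallest_prime_factor(n)
        m = n
        while m % p == 0:
            m //= p
        if m == 1:               # n is a pure prime power p^k
            total += math.log(p)
    return total

def psi_explicit(x: float) -> float:
    """Explicit formula truncated to the first ten zeros rho = 1/2 + i*gamma;
    each conjugate pair contributes twice the real part of x**rho / rho."""
    s = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
    for g in GAMMAS:
        rho = complex(0.5, g)
        s -= 2 * (x ** rho / rho).real
    return s

print(psi_true(20.0), psi_explicit(20.0))  # already close with ten zeros
```

Even ten zeros reproduce ψ(20) = 19.2657 to within roughly one unit; adding more zeros sharpens the jumps at the primes and prime powers.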



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #357 on: January 17, 2017, 12:04:54 AM »

The sum of any two sides of a triangle is greater than the third side.

If any of the non-trivial zeros of the Riemann zeta function ζ(s) were to lie off the critical line, at s = σ + it with σ = 1/2 - ε, then the total length of the distance/segment allotted to the two zeta functions, 63.636363... or the choice of 200/π, would amount to more than 100 sacred cubits (one sacred cubit = 0.63661977).

The five elements sequence of proportions would be disrupted: the distance from the previous zero to the zero lying off the critical line, and from that zero (on the σ = 1/2 - ε line) to the next zero, would be greater than the distances from that previous zero to the next two zeta zeros found on the critical line.

The average distance between two consecutive zeta zeros at height t = 10^100 is:

2π/ln(t/2π) ≈ 0.0272

For a Lehmer pair the value of the distance would be even less than that.
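
The average-gap figure follows from the standard counting of zeros up to height T, N(T) ≈ (T/2π)·ln(T/2π). A sketch evaluating it at t = 10^100; the two common variants, 2π/ln(t/2π) and 2π/ln t, agree here to about 0.0003:

```python
import math

t = 1e100

# mean spacing between consecutive zeros near height t
gap_a = 2 * math.pi / math.log(t / (2 * math.pi))  # ≈ 0.0275
gap_b = 2 * math.pi / math.log(t)                  # ≈ 0.0273 (simpler variant)

print(gap_a, gap_b)
```

So at that height the zeros are, on average, only about 0.027 apart, and a Lehmer pair would be closer still.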

Then, the delicate balance of the five elements proportions, based on the sacred cubit measurements, would be totally disordered: none of the zeros in that 100 sacred cubit segment could remain in their proper places while still obeying the five elements law of proportions.

Moreover, the same thing would occur with the zeros of the second zeta function wave which propagates in a direction opposite to that of the first zeta function wave.

This fundamental disruption of the five elements law of proportions would propagate/transmit itself all the way back to the very first zero of the zeta function: 14.134725.

In my opinion, the sequence of zeta zeros on the very first segment contains all the information necessary to draw a definite conclusion about the Riemann hypothesis, if we can prove that those first 18 zeros do obey the five elements law of proportions.

Perhaps B. Riemann noticed that those first zeros obeyed some sort of law of proportions, which then led him to the hypothesis that most probably all the remaining zeros are located on the critical line.

« Last Edit: January 17, 2017, 09:02:08 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #358 on: January 17, 2017, 11:54:33 PM »

The subdivision of the 100 sc (200/π) line according to the law of proportions based on the five elements of the Gizeh pyramid (26.7, 53.4, 80, 136.1, 534) produces the zeros of the zeta function. These in turn are related directly to the distribution of the prime numbers.

Applying the five elements proportions to the sacred cubit distance:


If we focus only on the term corresponding to the 136.1 figure, we can divide the line in this manner:

63.6363 - 16.1773 = 47.459

47.459, 12.066, 7.11825, 4.7459, 2.373

9.5445, 2.4266, 1.431675, 0.95445, 0.477225

9.5445 - 2.4266 = 7.118

12.066, 3.0676, 1.8099, 1.2066, 0.6033

12.066 - 3.0676 = 9

9, 2.288, 1.35, 0.9, 0.45

14.134725 + 7.118 = 21.252725

16.1773 - 9.5445 = 6.6328

6.6328, 1.68632, 0.99492, 0.66328, 0.33164

14.134725 + 9.5445 + 1.68632 = 25.365545

14.134725 + 16.1773 = 30.31

14.134725 + 16.1773 + 2.288 = 32.6

77.771025 - 12.066 = 65.705

These represent just the first approximations, working up from the 136.1 term; adding the terms corresponding to the 80, 53.4 and 26.7 values, in the same law of five elements proportion, will provide further accuracy in obtaining the final figure.
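
Reading the rows above, each head value h appears to generate the sequence h, 0.25424·h, 0.15·h, 0.1·h, 0.05·h (the 0.25424 factor being 2.5424/10). This multiplier pattern is my reading of the listed figures, not something stated explicitly in the text; a sketch reproducing the rows:

```python
def five_elements_row(h):
    """Subdivide a head value h by the multipliers read off the listed rows."""
    return [h, 0.25424 * h, 0.15 * h, 0.10 * h, 0.05 * h]

# head values taken from the rows above; output matches the listed
# figures to within about 1e-3
for h in (47.459, 9.5445, 12.066, 9.0, 6.6328):
    print([round(v, 5) for v in five_elements_row(h)])
```

For example, 47.459 × 0.25424 = 12.066 and 9.5445 × 0.15 = 1.431675, exactly the figures listed, so one multiplier set generates every row.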

At each point we would be dealing with a series whose terms resemble:

63.66 - (63.66 x 2.5424)/10 ± (63.66 x 2.5424²)/100 ± ...

(we can also start from the right, 14.134725 + ...)

I believe that the ± signs would follow this pattern:

Five elements recursion formula


1 + Δ1 = 3
3 + Δ2 = 5
5 - Δ3 = 2
2 + Δ4 = 4
4 - Δ5 = 1

Δ1 + Δ2 - Δ3 + Δ4  - Δ5
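
Reading the recursion literally, the increments are Δ1 = 2, Δ2 = 2, Δ3 = 3, Δ4 = 2, Δ5 = 3, and the signed sum Δ1 + Δ2 - Δ3 + Δ4 - Δ5 equals zero, which is just the statement that the cycle 1 → 3 → 5 → 2 → 4 → 1 closes on itself. A minimal check:

```python
# the five elements cycle 1 -> 3 -> 5 -> 2 -> 4 -> 1, with signs + + - + -
seq = [1, 3, 5, 2, 4, 1]
signs = [+1, +1, -1, +1, -1]

deltas = [abs(b - a) for a, b in zip(seq, seq[1:])]
signed_sum = sum(s * d for s, d in zip(signs, deltas))

print(deltas, signed_sum)  # [2, 2, 3, 2, 3] 0
```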

14.134725 x 4.5 ≈ 63.6

4.5 x 2.5424 = 11.4408

286.1 ≈ 450 sc

11.4408 x 100 ≈ 4 x 286.1

45 = 5 x 9 (five elements x (seven-tone musical scale + two intervals)); the enneagram is actually made up of seven notes and two intervals

1.618034 ≈ 2.5424 x 1 sc ≈ 4 sc²

The 63.66 segment can be divided according to the five elements law of proportions starting from the right (the first zeta function), and/or from the left side (the second zeta function).

Each and every zero of the zeta function is created by a subdivision of the 100 sc line into proportions based on the five elements sequence.

Each and every zero of the zeta function can be represented just in terms of the values 14.134725 (the first zero, and the height of the frustum of the Gizeh pyramid divided by ten), 63.66 (200/π), 2.5424 and 286.1.

« Last Edit: September 06, 2017, 02:44:34 AM by sandokhan »



  • Flat Earth Sultan
  • Flat Earth Scientist
  • 7249
Re: Advanced Flat Earth Theory
« Reply #359 on: January 19, 2017, 12:34:52 AM »

Most mathematicians study Riemann's zeta function in a purely mathematical setting, without paying attention to the fact that the zeta function also exists in a quantum physical perspective.

The correlation between the arrangement of the Riemann zeroes and the energy levels of quantum chaotic systems means that the zeta function can describe the very intricate quantum physics on an infinitesimal level.

We have already seen that the most interesting segment of the zeta function, to which no one else has been paying any attention, is the area of the graph situated between the first zeros (±14.134725...):

"The RH and (5.3) imply that, as t → ∞, the graph of Z(t) will consist of tightly packed spikes, which will be more and more condensed as t increases, with larger and larger oscillations. This I find hardly conceivable. Of course, it could happen that the RH is true and that (5.3) is not."

No physicist/mathematician asks himself/herself this question: if there will be more and more oscillations, how then do these observations fit within the quantum physical model?

Nor do they understand that within the limited/infinitesimal volume of a boson, there will always be two zeta functions which will propagate in opposite directions, dividing the line into a sacred cubit fractal.

The two zeta function waves are sound waves which means we are dealing with the phenomenon called cymatics.

In my opinion, at the very point of reaching the extreme density of these sound waves (perhaps this will occur for T = e^(450 sc), or T = e^(1000 sc), which might very well be the lower bound and, respectively, the upper bound of the Skewes number in a sacred cubit formulation), there will be a resetting of the entire system: extreme sound cymatics turns into stillness/silence (extreme yang leads to yin), and the entire process will start all over again. (sound/ether - stillness/silence/aether interplay)
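
For a sense of scale (assuming the exponents are read as e raised to 450 sc and 1000 sc): with sc = 0.63661977, e^(450 sc) = e^286.48 ≈ 10^124.4 and e^(1000 sc) = e^636.62 ≈ 10^276.5. The second value overflows a double, so a sketch works in base-10 logarithms instead:

```python
import math

sc = 0.63661977  # one "sacred cubit"

# log10(e^(k*sc)) = k*sc / ln(10); avoids overflow for huge exponents
for k in (450, 1000):
    log10_T = k * sc / math.log(10)
    print(f"e^({k} sc) ~ 10^{log10_T:.1f}")
```

Both values are far below the classical Skewes bound of 10^10^10^34, which is worth keeping in mind when comparing them.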

If the zeta function is a sound wave, then we will have to deal with the concept of the extreme density of sound: cymatics. This is the reason why I believe that those spikes will not be allowed to get any larger. A clear distinction has to be made between studying the Riemann zeta function in a pure mathematical setting, and the study of the same function in a quantum physical context.

It is the law of proportions based on the five elements (26.7, 53.4, 80, 136.1, 534) which is the source of the apparently random distribution of the zeros of the zeta function, and which in turn leads to the known distribution of the prime numbers.

« Last Edit: January 19, 2017, 09:59:13 AM by sandokhan »