Wednesday, December 8, 2010

The physicist who tells us... “black holes exist” is [saying] something's wrong with the laws of modern physics

Amplify’d from www.reciprocalsystem.com
The Retreat From Reality
This development arrives at a picture of the place of electricity in the
physical universe that is totally different from the one that we get from
conventional physical theory.
This finding that an entire subdivision of accepted physical
theory is not valid is difficult for most scientists to accept, particularly
in view of the remarkable progress that has been made in the application
of existing theory to practical problems. But neither a long period of
acceptance nor a record of usefulness is sufficient to verify a theory.
The history of science is full of theories that enjoyed general acceptance
for long periods of time, and contributed significantly to the advance
of knowledge, yet eventually had to be discarded because of fatal defects.
Present-day electrical theory is not unique in this respect; it is just
another addition to the long list of temporary solutions to physical problems.

None of the other basic entities of the physical universe–about
six or eight of them, the exact number depending on the way in which the
structure of fundamental theory is erected–is much, if any, better known
than the electric charge. The nature of time, for instance, is even more
of a mystery. But these entities are the foundation stones of physics,
and in order to construct a physical theory it is necessary to make some
assumptions about each of them. This means that present-day physical theory
is based on some thirty or forty assumptions about entities that are almost
totally unknown.


Obviously, the probability that all of these assumptions
about the unknown are valid is near zero. Thus it is practically certain,
simply from a consideration of the nature of its foundations, that the
accepted structure of theory contains some serious errors.

In addition to the effects of the lack of understanding
of the fundamental entities of the physical universe, there are some further
reasons for the continued existence of errors in conventional physical
theory that have their origin in the attitudes of scientists toward their
subject matter. There is a general tendency, for instance, to regard a
theory as firmly established if, according to the prevailing scientific
opinion, it is the best theory of the subject that is currently available.
As expressed by Henry Margenau, the modern scientist does not speak of
a theory as true or false, but as “correct or incorrect relative to a
given state of scientific knowledge.”64


One of the results of this policy is that conclusions
as to the validity of theories along the outer boundaries of scientific
knowledge are customarily reached without any consideration of the cumulative
effect of the weak links in the chains of deductions leading to the premises
of these theories. For example, we frequently encounter statements similar
to the following:




The laws of modern physics virtually demand that black holes exist.65
No one who accepts general relativity has found any way to escape the prediction that black holes must exist in our galaxy.66



These statements tacitly assume that the reader accepts
the “laws of modern physics” and the assertions of general relativity
as incontestable, and that all that is necessary to confirm a conclusion–even
a preposterous conclusion such as the existence of black holes–is to verify
the logical validity of the deductions from these presumably established
premises. The truth is, however, that the black hole hypothesis stands
at the end of a long line of successive conclusions, included in which
are more than two dozen pure assumptions.

The age of electricity began with a series of experimental
discoveries: first, static electricity, positive* and negative*, then
current electricity, and later the identification of the electron as the
carrier of the electric current. Two major issues confronted the early
theorists: (1) Are static and current electricity different entities,
or merely two different forms of the same thing? (2) Is the electron
only a charge, or is it a charged particle? Unfortunately, the consensus
reached on question (1) by the scientific community was wrong. The theory
of electricity thus took a wrong direction almost from the start. There
was spirited opposition to this erroneous conclusion in the early days
of electrical research, but Rowland’s experiment, in which he demonstrated
that a moving charge has the magnetic properties of an electric current,
silenced most of the critics of the “one electricity” hypothesis.

The issue as to the existence of a carrier of electric
charge–a “bare” electron–has not been settled in this manner. Rather,
there has been a sort of a compromise. It is now generally conceded that
the charge is not a completely independent entity. As expressed by Richard
Feynman, “there is still ‘something’ there when the charge is removed.”67
But the wrong decision on question (1) prevents recognition of the functions
of the uncharged electron, leaving it as a vague “something” not credited
with any physical properties, or any effect on the activities in which
the electron participates. The results of this lack of recognition of
the physical status of the uncharged electron, which we have now identified
as a unit of electric quantity, were described in the preceding pages,
and do not need to be repeated.

In Aristotle’s physical system, which
was the orthodox view of the universe for nearly two thousand years, it
was assumed that the planets were attached to transparent spheres that
rotated around the earth. But according to the laws of motion, as they
were understood at that time, this motion could not be maintained except
by continual application of a force. So Aristotle employed the same device
that his modern successors are using: the ad hoc assumption. He postulated
the existence of angels who pushed the planets along in their respective
orbits. The “nuclear force” of modern physics is the exact equivalent
of Aristotle’s “angels” in all but language.

With the benefit of the additional knowledge that has
been accumulated in the meantime, we of the present era have no difficulty
in arriving at an adverse judgment on Aristotle’s assumption. But we need
to recognize that this is an illustration of a general proposition. The
probability that an untestable assumption about a physical entity or phenomenon
is a true representation of physical reality is always low. This is an
unavoidable consequence of the great diversity of physical existence.
When one of these untestable assumptions is used in the ad hoc manner–that
is, to evade a discrepancy or conflict–the probability that the assumption
is valid is much lower.

The reason for this can easily be seen if we consider
the way in which the probability of validity is affected. Because of the
complexity of physical existence mentioned earlier, the probability that
an untestable assumption is valid is inherently low. In each case, there
are many possibilities to be conceived and taken into account.
If each assumption of this kind has an even chance (50 percent) of being
valid, there is some justification for using one such assumption in a
theory, at least tentatively. If a second untestable assumption is introduced,
the probability that both are valid becomes one in four, and the use of
these assumptions as a basis for further extension of theory is a highly
questionable practice. If a third such assumption is added, the probability
of validity is only one in eight, which explains why pyramiding assumptions
is regarded as unsound.
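The compounding described above is simply the multiplication of independent probabilities. A minimal sketch, taking the text's illustrative figure of an even (50 percent) chance per untestable assumption, which is a stipulated value, not a measured one:

```python
# Probability that a chain of independent, untestable assumptions is
# entirely valid, given some per-assumption chance of validity.
# The 0.5 default is the text's illustrative figure, not a measured value.
def chance_all_valid(n_assumptions, p_each=0.5):
    return p_each ** n_assumptions

# 2 assumptions -> 0.25 (one in four); 3 -> 0.125 (one in eight),
# matching the figures given in the text.
for n in (1, 2, 3):
    print(n, chance_all_valid(n))

# For the "thirty or forty" foundational assumptions mentioned earlier,
# the joint probability collapses toward zero even at generous odds.
print(35, chance_all_valid(35))
```

The point of the sketch is only that the joint probability falls geometrically: each added assumption halves it, which is why pyramiding assumptions is regarded as unsound.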

The following comment by Abraham Pais is appropriate:

Despite much progress, Einstein’s earlier
complaint remains valid to this day. “The theories which have gradually
been associated with what has been observed have led to an unbearable
accumulation of individual assumptions.”69


Of course, it is possible for an assumption to be upgraded
to the status of established knowledge by discovery of confirmatory evidence.
This is what happened to the assumption as to the existence of atoms.
The present uncritical acceptance of the nuclear atom-model is not a result
of more empirical support, but of increasing familiarity, together with
the absence (until now) of plausible alternatives. A comment by N. R.
Hanson on the quantum theory, one of the derivatives of the nuclear atom
model, is equally applicable to the model itself. This theory, he says,
is “conceptually imperfect” and “riddled with inconsistencies.” Nevertheless,
it is accepted in current practice because “it is the only extant theory
capable of dealing seriously with microphenomena.”70

The finding that the nuclear atom-model rests on false
premises does not necessarily invalidate the currently accepted mathematical
relationships derived from it, or suggested by it. This may appear
contradictory, as it implies that a wrong theory may lead to correct answers.
However, the truth is that the conceptual and mathematical aspects of
physical theories are, to a large extent, independent. As Feynman puts
it, “Every theoretical physicist who is any good knows six or seven different
theoretical representations for exactly the same physics.”74
Such a physicist recognizes that many different conceptual explanations
can agree with the same mathematical relations. A major reason for this is
that the mathematical relations are usually identified first, and an explanation
in the form of a theory is developed later as an interpretation of the
mathematics. As noted earlier, many such explanations are almost
always possible in each case. In the course of the investigation on which
this present work is based, this has been found to be true even where
the architects of present-day theory contend that “there is no other way.”

This is what has happened as a result of the assumptions
that were made in the course of developing the nuclear atom-model. Once
it was assumed that the atom is composed primarily of oppositely charged
particles, and some valid mathematical relations were developed and expressed
in terms of this concept, the prevailing tendency to accept mathematical
agreement as proof of validity, together with the absence (until now)
of any serious competition, elevated this product of multiple assumptions
to the level of an accepted fact. “Today we know that the atom consists
of a positively charged nucleus composed of protons and neutrons surrounded
by negatively charged electrons.” This positive statement, or its equivalent,
can be found in almost every physics textbook. But any proposition that
rests on assumptions is hypothesis, not knowledge. Classifying a model
that rests upon more than a dozen independent assumptions, mostly untestable,
and including several of the inherently dubious “ad hoc” variety, as “knowledge”
is a travesty on science.

When the true status of the nuclear atom-model is thus
identified, it should be no surprise to find that the development of the
theory of the universe of motion reveals that the atom actually has a
totally different structure. We now find that it is not composed
of individual particles, and in its normal state it contains no electric
charges. This new view of atomic structure was derived by deduction from
the postulates that define the universe of motion, and it therefore participates
in the verification of the Reciprocal System of theory as a whole. However,
in view of the crucial position of the nuclear theory in conventional
physics it is advisable to make it clear that this currently accepted
theory is almost certainly wrong, on the basis of current physical
knowledge,
even without the additional evidence supplied by the present
investigation, and that some of the physicists who were most active in
the construction of the modern versions of the nuclear model concede that
it is not a true representation of physical reality. This is the primary
purpose of the present chapter.

The magnitudes of the basic physical properties extend
through a much wider range in the astronomical field than in the terrestrial
environment. A question of great significance, therefore, in the study
of astronomical phenomena, is whether the physical laws and principles
that apply under terrestrial conditions are also applicable under the
extreme conditions to which many astronomical objects are subjected. Most
scientists are convinced, largely on philosophical, rather than scientific,
grounds, that the same physical laws do apply throughout the
universe. The results obtained by development of the consequences of the
postulates that define the universe of motion agree with this philosophical
assumption. However, there is a general tendency to interpret this principle
of universality of physical law as meaning that the laws that have
been established as applicable to terrestrial conditions are applicable
throughout the universe.
This is something entirely different, and
our findings do not support it.

The error in this interpretation of the principle stems
from the fact that most physical laws are valid, in the form in which
they are usually expressed, only within certain limits. Many of the currently
accepted laws applicable to solids, for example, do not apply at temperatures
above the melting points of the various material substances. The prevailing
interpretation of the uniformity principle carries with it the unstated
assumption that there are no such limits applicable to the currently accepted
laws and principles other than those that are recognized in present-day
practice. In view of the very narrow range of conditions through which
these laws and principles have been tested, this assumption is clearly
unjustified, and our findings now show that it is definitely incorrect.
We find that while it is true that the same laws and principles are applicable
throughout the universe, most of the basic laws are subject to certain
modifications at critical magnitudes, which often exceed the limiting
magnitudes experienced on earth, and are therefore unknown to present-day
science. Unless a law is so stated that it provides for the existence
and effects of these critical magnitudes, it is not applicable
to the universe as a whole, however accurate it may be within the narrow
terrestrial range of conditions.

One property of matter that is subject to an unrecognized
critical magnitude of this nature is density. In the absence of thermal
motion, each type of material substance in the terrestrial environment
has a density somewhere in the range from 0.075 (hydrogen) to 22.5 (osmium
and iridium), relative to liquid water at 4° C as 1.00. The average density
of the earth is 5.5. Gases and liquids at lower densities can be compressed
to this density range by application of sufficient pressure. Additional
pressure then accomplishes some further increase in density, but the increase
is relatively small, and has a decreasing trend as the pressure rises.
Even at the pressures of several million atmospheres reached in shock
wave experiments, the density was only increased by a factor of about
two. Thus the maximum density to which the contents of the earth could
be raised by application of pressure is not more than about 15.
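The bound asserted here is back-of-envelope arithmetic on the figures quoted above. A sketch using only the text's own numbers (densities relative to water at 4° C as 1.00):

```python
# Figures quoted in the text (relative densities, water at 4 C = 1.00).
earth_avg_density = 5.5          # average density of the earth
shock_compression_factor = 2.0   # max increase seen in shock-wave experiments
text_max_terrestrial = 15        # the text's stated ceiling for compressed matter

# Compressing the earth's contents by the observed factor of about two
# stays near the text's ceiling of roughly 15.
max_terrestrial = earth_avg_density * shock_compression_factor

# Gap between that ceiling and the low end of white dwarf densities.
white_dwarf_min = 100_000
gap = white_dwarf_min / text_max_terrestrial

print(max_terrestrial)   # about 11, consistent with the ~15 ceiling
print(round(gap))        # roughly a 6700-fold gap
```

The arithmetic only restates the text's claim in numbers: pressure alone leaves a gap of several thousandfold between terrestrial and white dwarf densities.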

The density of most of the stars of the white dwarf class
is between 100,000 and 1,000,000. There is no known way of getting
from a density of 15 to a density of 100,000. And present-day physics
has no general theory from which an answer to this problem can
be deduced. So the physicists, already far from the solid ground of reality
with their hypotheses based on an atom-model that is “only a symbol,”
plunge still farther into the realm of the imagination by adding more
assumptions to the sequence of 14 included in the nuclear atom-model.
It is first assumed (15) that at some extremely high pressure the hypothetical
nuclear structure collapses, and its constituents are compressed into
one mass, eliminating the vacant space in the original structure, and
increasing the density to the white dwarf range.

How the pressure that is required to produce the “collapse”
is generated has never been explained. The astronomers generally assume
that this pressure is produced at the time when, according to another
assumption (16), the star exhausts its fuel supply.


With its fuel gone it [the star] can no longer generate the pressure needed to maintain itself against the crushing force of gravity.75


But fluid pressure is effective in all directions; down
as well as up. If the “crushing force of gravity” is exerted against a
gas rather than directly against the central atoms of the star, it is
transmitted undiminished to those atoms. It follows that the pressure
against the atoms is not altered by a change of physical state due to
a decrease in temperature, except to the extent that the dimensions of
the star may be altered. When it is realized that the contents of ordinary
stars, those of the main sequence, are already in a condensed state (a
point discussed in detail in Volume III), it is evident that the change
in dimensions is too small to be significant in this connection. The origin
of the hypothetical “crushing pressure” thus remains unexplained.

This line of thought that we have followed from the physicists’
concept of the nature of electricity to the nuclear model of atomic structure,
and from there to the singularity, is a good example of the way in which
unrestrained application of imagination and assumption in theory construction
leads to ever-increasing levels of absurdity–in this case, from atomic
“collapse” to degenerate matter to neutron star to black hole to singularity.
Such a demonstration that extension of a line of thought leads to an absurdity,
the reductio ad absurdum, as it is called, is a recognized logical
method of disproving the validity of the premises of that line of thought.
The physicist who tells us that “the laws of modern physics virtually
demand that black holes exist” is, in effect, telling us that there is
something wrong with the laws of modern physics. In the preceding pages
we have shown just what is wrong: too much of the foundation of conventional
physical theory rests on untestable assumptions and “models.”
