What Do We Mean by "Meaning"?
That is,
what makes something meaningful; in what does meaning consist, whether
in an artwork, a text, factual or other data, a life, etc.? I believe meaning resides in or arises from relatedness. I
arrived at that hypothesis in 1975 through study of John Milton’s Samson Agonistes. My formulation may sound almost banal,
but I’ve never found a better one.
When we first encounter anything new, it is at least partially meaningless
or unintelligible to us. Think about the first time you heard a foreign
language, or try to imagine the first time you saw the stars. Your
senses presented you with a wealth of raw data, but the data seemed
meaningless and chaotic. If immediately afterward, you’d been
asked to repeat one of the sentences spoken in the foreign language
(let alone interpret it), or to draw part of the starry sky (let alone
explain that each point of light was in fact a sun), you probably
wouldn’t have done very well. It was only as you became aware
of many interconnections within the raw data or between parts of it
and other contexts, that the data became both meaningful and memorable
(for example, when you learned to see the relationships among the
positions of the stars in terms of a "Big Dipper," etc.).
It’s human nature to try to make sense of things, so even if
there’s no one around to explain the unfamiliar to us, we usually
notice things about it, relationships among its components or between
it and the rest of our experience. Even if these relationships don’t
seem terribly meaningful to us at first, with repeated encounters,
patterns emerge and start to seem more familiar and more meaningful.
Most mnemonic devices consist in creating artificial relationships
that lend meaning to data that, at least at the time the mnemonic
is needed, otherwise seem more or less arbitrary and not to have any
necessary relationship to anything. The things that seem most
meaningful to us are, I would suggest, those in which the greatest number
or intensity of relationships to other things converge.
A dictionary is a giant tautology: every word in
it is defined through reference to other words. But if the meanings
of words consist entirely in their connections to other words, which
themselves have meaning only because of their connections to other
words, how do we ever come to understand our first word? And is “meaning”
a meaningful concept at all?
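The circularity described above can be sketched with a toy illustration: an invented three-word "dictionary" (the words and the `words_reachable` helper are hypothetical, chosen only to make the structure visible) in which every definition points only at other words in the same set, so that following definitions from any starting word never leads outside the circle.

```python
# A toy "dictionary" in which every word is defined only by other words.
# Following definitions from any starting word just circles within the
# set, never reaching anything outside it -- the tautology in question.
toy_dictionary = {
    "big":   ["large"],
    "large": ["great"],
    "great": ["big"],
}

def words_reachable(start: str) -> set[str]:
    """Collect every word reachable by repeatedly looking up definitions."""
    seen, stack = set(), [start]
    while stack:
        word = stack.pop()
        if word not in seen:
            seen.add(word)
            stack.extend(toy_dictionary.get(word, []))
    return seen

print(words_reachable("big"))  # definitions never exit the circle
```

However long we traverse, we only ever land on other words, which is the puzzle: nothing in the structure itself tells us how the first word was understood.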
People commonly believe that words refer to things, and are defined
through reference to those things, not just through reference to other
words. This belief seems quite correct as far as it goes; but the
same logical difficulty then arises as to things: what
is involved in becoming able to “read” or interpret things?
When individuals blind from birth first become able to see, I understand
that the field of view at first seems completely chaotic and unintelligible to them.
They must learn how to interpret it. Such a person may be
helped by referring to other kinds of contexts that they already understand--words,
touch, etc.
But the same logical problem that we found with respect to words again
arises with respect to vision or any of our other senses. If we make
sense of the unfamiliar by relating it to the familiar, was there
some first, “a priori” knowledge that gave us
a starting-point, to which we directly or indirectly relate all our
subsequent experiences? Or are all our understandings of our sensory
impressions, like words in a dictionary, a giant tautology? It
seems that all cognition is to some extent recognition; or could it
be some kind of bootstrap operation without any ultimate antecedent
or ground?
One possible solution is to posit something like Platonic “Ideas,”
or perhaps Chomskian deep structures; some kind of innate set of ideas
or linguistic or other structure through reference to which we relate
our subsequent experiences in life and that provides the ground upon
which we are able to build new connections. (I have no first-hand
knowledge of Chomsky’s theories, which I understand are in any
event evolving, so please do not rely on my characterization of them.)
Another, somewhat different possibility is to suppose that what’s
innate is not any particular set of ideas, but rather a capacity to
carry out certain processes or algorithms to identify (or fabricate)
certain basic kinds of relationships, such as the two relationships
brilliantly analyzed by the philosopher David Hume:
perceptual resemblance (i.e., this looks similar to that)
and contiguity in space or time (i.e., this generally follows shortly
after that). See Hume’s "An Enquiry Concerning Human Understanding,"
at http://www.etext.leeds.ac.uk/hume/ehu/ehupbsb.htm#index-div2-N970891263 (if you remain unconvinced that simply saying words refer to things
fails to fully explain how we learn to interpret words, things, or,
for that matter, anything else, please read Hume’s essay; if
that does not convince you, nothing I can add here will).
When I was in college ca. 1975, I read about research on brain hemisphericity.
As you may know, the brain is divided into two hemispheres. The hemispheres
are usually in active communication with one another through a thick
rope of nerve fibers called the corpus callosum. In some people, that
connection has through one cause or another been severed. Studies
involving such persons helped to show a division of labor between
the two hemispheres. (As many of us now know, language and math
usually seem to be handled by the left hemisphere, while visual cognition
is handled by the right hemisphere.)
Because of the way our eyes are connected to the brain, it’s
possible to show objects to such a person in such a way that the person
can “see” the objects with only one hemisphere at a time.
In one such study that caught my attention, each subject was shown
a collection of objects which, as I recall, included a cigarette,
matches, and a piece of chalk, among other things. The subjects were
asked to group the objects that belonged together. When the subjects
saw the objects with their left hemispheres, they put together the
cigarette and matches—grouped by function. When the subjects
saw the objects with their right hemispheres, they put together the
cigarette and chalk—grouped by likeness in appearance.
It would be a leap to conclude that human minds recognize no other
kinds of relatedness, but it does seem that two important
types are: (1) relatedness based on function, use, purpose,
causation, or more fundamentally, a close or at least predictable
contiguity or conjunction in space or time, and (2)
relatedness based on similarity or contrast in appearance or other
perceptible attributes, regardless of location in space or
time (this characterization of course raises another big question, of
how we determine similarity; that’s one digression I’ll
resist for now).
Perhaps there need be no ultimate ground upon which we build our understandings
of things; perhaps, rather, all meaning is tautological but just doesn’t
feel that way because we’ve become comfortable within the webs
of connections we’ve built. Maybe a concept such as causation
consists entirely in our accumulated associations to the word, and
our experience of causation is derived merely from our capacity to
notice contiguity. When we were in the womb, we had no idea where
we were, nor even any awareness of our own ignorance. Most of us have no recollection
of what that was like. Perhaps in a similar way, we’ve forgotten
that there IS no ultimate ground for everything that now seems intelligible
and meaningful to us. There was a time when almost everything seemed
strange, but now it doesn’t.
I think all languages and other systems of symbols or forms of expression
are ultimately tautological--but that that's ok, because somehow,
to at least a large degree, they seem to work.
An important corollary of the idea that meaning resides in relatedness
is that, theoretically, the meaning of any word or thing can
be fully understood only if and when all of its connections
and relationships to other words and things are understood. These relationships include those embedded in the precise, actual
context in which the word or thing appears. For example, the full
meaning of any particular word as used in any particular instance
could be understood only if one had not only read its O.E.D.
definition and any encyclopedia entries on it, but had
also considered, among other things, every linguistic, psychological,
physical, historical and other detail of the context in which the
word was used. We should soon realize that it must in fact be impossible
ever to fully exhaust the meaning of anything--that since everything
is connected to everything else, to state the whole meaning of anything
could be done only by a god re-speaking the entire universe.
Similarly, I think when we talk about the meaning of an art work,
we can acknowledge that the work always refers to things outside itself
in various ways and degrees, even if those relationships are tenuous
or disguised, and that at the same time, the work can also be meaningful
partly or even primarily based on relationships contained within the
work itself.
Why does it matter how meaning is derived at all?
I think it matters because it is helpful to understand the extent
to which meaning is not determined and fixed, for all time
and in all contexts: to realize that there is no one meaning
that is always and completely the correct, “right” meaning. (As discussed further in the essay on this website entitled "What
Can We Know," that doesn’t mean that we should throw up
our hands and declare all interpretations to be of equal value for
all purposes.) (And it is helpful at the same time to realize that,
within any particular context, each particular articulation carries
a distinct meaning, which may be more useful or true, or less so,
than that of another particular articulation.)
Another area of inquiry interesting to consider in connection with
the subject of meaning is information theory. According to this theory,
information consists in those symbols that are uncertain in
the sense that they could not be interpolated or predicted by the
recipient. For example, the old ad along the lines of “if
u cn rd ths, u cd b a secr’y” shows that the omitted letters
contribute relatively little to the reader’s ability to decipher the
writer’s intended meaning. According to this theory, uncertainty
is the essence of information. In the world of telecommunications,
theorists say, “[t]he amount of information, or uncertainty,
[expressed] by an information source is a measure of its entropy.
In turn, a source's entropy determines the amount of bits per symbol
required to encode the source's information”, and “[t]he
complexity of the code chosen is determined by the number of possible
symbols needed to transmit the information” (see http://www.lucent.com/minds/infotheory/what1.html ). See the essay
on this site entitled The Arts and
Literature for my thoughts on how this theory might weigh in attempting
to judge the relative merit of artworks.
Stated another
way, it takes more
data to describe a more chaotic system, less data to describe a more
ordered system. There is an inverse relation between information and
syntropy.
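The quoted relation between entropy and bits per symbol can be made concrete with a toy calculation. The sketch below (the `shannon_entropy` helper is my own, using each character's frequency in the string as its probability, an assumption not made in the quoted source) shows that a highly predictable, "ordered" string needs far fewer bits per symbol than a string in which every symbol is a surprise:

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Average bits per symbol needed to encode `text`,
    treating each character's relative frequency as its probability."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

ordered = "aaaaaaab"   # highly predictable: about 0.54 bits/symbol
chaotic = "axq7zpm!"   # every symbol unique: 3.0 bits/symbol
print(shannon_entropy(ordered))
print(shannon_entropy(chaotic))
```

The second string, being less predictable, carries more "information" in the theory's sense, which is just the inverse relation stated above between information and order.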