The Denial of Art in Science


Finch: It was strange.  I suddenly had this feeling that everything was connected. It was like I could see the whole thing…one long chain of events that stretched all the way back before Larkhill. I felt like I could see everything that had happened, and everything that was going to happen. It was like a perfect pattern laid out in front of me, and I realized that we are all part of it, and all trapped by it.

Dominic: So do you know what’s going to happen?

Finch: No, it was a feeling.  But I can guess…

- V for Vendetta

A couple of days ago I watched V for Vendetta for something like the 500th time.  It is one of those brilliant movies I can watch over and over again, noticing something new each time (including a few flaws here and there).  This time what struck me was the subtly implied dialectic between artistic sentimentality – embodied by V among others – and the coldly rationalistic orientation of the villains, primarily Creedy and Sutler.

The plot of the movie is revealed through the investigations of Chief Inspector Finch, as he gradually uncovers the connections between V’s personal history and the rise to power of the authoritarian Norsefire party.  Despite being a party member, Finch is identifiable as a protagonist by his embrace of intuition and narrative, as in the quote above.  The contrast is particularly clear in the following exchange:

Nameless Yes-man: Chancellor, I know no one seems to want to discuss this but, [deep breath] if we are to be prepared for any eventuality, then it can’t be ignored any longer.  The red report in front of you has been vetted by several demolition specialists.  Now it concludes that the most logical delivery system for the terrorist to use would be an airborne attack.

A separate report has been filed suggesting a train, despite the fact that the tunnels surrounding Parliament have been sealed shut.

Chancellor Sutler: [ominously] Who filed that report?

Nameless Yes-man: Chief Inspector Finch.

Chancellor Sutler: Do you have any evidence to support this conclusion, Mr. Finch?

Finch: No sir, it’s just a feeling.

Chancellor Sutler: [menacing] If I am sure of anything, Inspector Finch, it is that this government will not survive if it is to be subject to your feelings.

In this exchange the line is drawn.  The unlikely protagonist follows his internal compass while the villain berates him for lack of evidence.  This is a curious dichotomy, and one I am beginning to recognize more frequently in the zeitgeist.  It is not only villainous politicians who express this attitude.  The intellectual community offers its own flavor in the form of empirical extremism.

Empirical extremists blithely dismiss any information that is not sourced from double-blind, placebo-controlled studies.  When presented with new ideas, they reflexively denigrate any propositions they deem untestable.  Conjectures that have not been tested, or that are not yet testable, are dismissed as unscientific.

We see this accusation leveled today at ongoing theoretical endeavors like string theory.  Wikipedia notes:

 Opponents, such as David Gross, suggest that the idea is inherently unscientific, unfalsifiable or premature.

My perusal of Wikipedia revealed fewer criticisms of this sort than I expected, so perhaps this debate is maturing within the academic community.  Nevertheless, such criticisms are alive and well in online discussion forums and in popular science culture.  Consider, for example, the worldview implied by the xkcd comics below.  First, endorsing half-baked empiricism:

Then mocking incomplete theory:

Similarly, I recently began reading Design in Nature: How the Constructal Law Governs Evolution in Biology, Physics, Technology, and Social Organization by Adrian Bejan.  [Check out John Hagel’s thorough review for a nice summary of the constructal law.]  A quick glance at the associated Wikipedia page reveals criticisms almost identical to those above:

The main criticism of constructal theory is that its formulation is vague.  The constructal law states that “For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed currents that flow through it”, but there is neither a mention of what these “currents” are nor an explicit definition of what “providing easier access” means. As a result, constructal theory is very versatile, but often unconvincing: depending on the choices made for the currents and the “access” to them, it can lead to extremely different results.

My intent here is not to start a debate about the validity of either theory (though admittedly I am intrigued by constructal theory).

I am more interested in the form of the criticisms themselves.  Though such skepticism may seem justified when presented in isolation, it is far more tenuous when considered in appropriate context.

The history of science is replete with “unscientific” ideas that were only gradually developed into empirically testable theories through long periods of speculative development.


When studying any complex phenomenon it is inevitable that researchers will intuit certain insights prior to formulating testable hypotheses.  When the subject matter in question is a candidate unified theory of everything (string theory) it is not unreasonable to expect that formulating testable hypotheses might take many years.

Albert Einstein published his first paper on the theory of special relativity in 1905.  In a 1907 publication he predicted gravitational time dilation (the notion that clocks run slower in a stronger gravitational field), but that hypothesis would not be testable for many years.  In a 1911 paper anticipating general relativity, Einstein predicted the gravitational deflection of light.  The first evidence confirming that prediction would not be produced until 1919.

Surely we don’t want to claim that Albert Einstein was dabbling in pseudoscience in those years before the experimentalists caught up with his theory?

But if we accept the standards implied above, we would have to conclude that indeed he was.  He was promulgating what appeared to be an untestable theory, and the initial skepticism with which relativity was received reflected that assessment.  Case in point: in 1921 Einstein was awarded the Nobel Prize, not for his work on relativity, but rather for his work on the photoelectric effect, as relativity was still considered controversial.

Einstein’s example makes clear the shortcoming of the strict empiricist attitude.  That attitude is not incorrect so much as it is myopic.

In short, it denies the role of art in science.  It denies the importance of intuition and insight.  It presumes that all “scientific” thinking must be conducted bottom-up, ignoring the role of top-down observation in guiding empirical experimentation.

The Role of Art in Science

It should not be surprising that the counterpoint to myopic rationalism is to be found in the arts rather than the sciences.  As Evey says in V for Vendetta – “Artists use lies to tell the truth.”  Ideally, science would use the truth to tell the truth…but unfortunately the path to objective truth is not always readily apparent.  Sometimes we need to catch a glimpse of subjective truth in order to orient the search for objective truth.

Those glimpses of subjective truth are the domain of art.  The translation of subjective truth into formal hypothesis is the domain of the theoretician.  If the world were simple, such translation would not be necessary.  Fortunately, the world is not simple.  It is beautiful and complex and offers plenty of fodder for theoreticians to play with.

To ignore the existence of complex problems in need of theoretical translation is the epitome of irrationality.  The dogmatic empiricists, who demand evidence before opening their minds to any new possibility, remain willfully ignorant of the labyrinthine nature of reality.  They are choosing to restrict their vision to only the reductive world quantifiable with current tools and technologies.  Historian Jacob Burckhardt described their folly thus:

The essence of tyranny is the denial of complexity.

I can do no better than Burckhardt, so I will leave it there for today…

photo courtesy of brandoncripps


  • Daniel Lemire

    Well, I think there is a difference between non-predictive, non-falsifiable theories and non-sourced theories.

    If you think people might use the train tunnels, it can turn out that you are proven right because you made a prediction and it happened.

    String theory fails to make such meaningful predictions.

    • GregoryJRader

      That is the wrong analogy.  The train tunnel prediction is a hypothesis without a well-developed theory behind it.  It is backed only by an intuition – or, as Finch says, a feeling.  The validation of that hypothesis does not prove a theory but rather that Finch is operating from a reliable mental model.
      In order to validate a theory Finch would need to externalize and formalize the mental model that is producing those accurate predictions.  Until that point, there is no *empirical* connection between the accurate prediction and Finch’s informal mental model.

      The situation is the same with regard to string theory.  String theorists could make predictions of the qualitative sort that Finch has made, but those predictions do little to validate (or discredit) the theory until it is sufficiently formalized.  

      So yes, string theory to date has failed to make such meaningful predictions.  And it is possible that it might not ever make such predictions. String theory might prove to be a dead end.  But that fact doesn’t make it unscientific, it merely makes it pre-scientific…or more accurately pre-empirical.  

      • Daniel Lemire

         Fair enough, but you make it sound as if string theory is barely tolerated. The reverse is true. String theory has pretty much eclipsed all alternatives. Moreover, it is excessively well developed. It is nowhere near a hunch… it is a *very* sophisticated pseudo-theory.

        The same thing happened for decades with classical AI. A lot of math-heavy formalism… and wild claims that, somehow, expert systems would take over… but without a shred of falsifiability. It pushed aside most other research endeavors in Computer Science for a couple of decades. Thankfully, it is now far smaller… but still huge (the Semantic Web is another brain-dead initiative to resurrect expert systems).

        In both cases, you have an abundance of theory and formalism… a lot of funding… and something that takes over much of the resources… but it is not real science…

        It is cargo cult science… it looks like science, but it is not…

        I don’t think cargo cult science can be considered art.

        • GregoryJRader

          I replied to most of the points here in my comment above.  I would just add that in order to properly evaluate these sorts of issues we need a notion of risk/reward in scientific endeavors.  I would attribute the failings you note more to the high-risk nature of AI research than to the theoretical approach itself.  The same criticism could be applied to huge government-sponsored medical studies, costing billions of dollars, that collect empirical data from thousands of patients.

          How do you know you are collecting the right data if you are not operating from sound theory?  (emphasis on *sound* theory – a collection of willy-nilly null hypotheses does not imply a sound theoretical justification for the experiment)

          Such studies are essentially data mining boondoggles.  They are not intrinsically unscientific if conceived of in the proper context, but they are exorbitantly wasteful and, given the risk/reward profile, certainly should not be sponsored by the government.  

  • Erik

    As Daniel says, there is a huge difference between tested and testable. Lack of the former is not a point in a theory’s favor, but lack of the latter means the theory is meaningless (barring some special cases blabla I have no motivation to hedge for in writing).

    • GregoryJRader

      The key point of contention here is what we mean by terms like “meaningless” or “unscientific”.  The implication I take from your phrasing is that such theoretical endeavors are useless, pointless, and should be abandoned.  If that is not the intent then I’m not sure why anyone would persist in asserting these arguments, as no one is debating the eventual need for empirical validation.

      Are *currently* untestable theoretical projects pointless/useless?

      In order to maintain that position you need to assert some arbitrary standard regarding the pace of theoretical progress.  It should be self-evident that insight does not spring from the mind in the form of fully formulated quantitative theory.  So at what point can a theoretical project be labeled unscientific?  After five years?  Ten years?  Twenty years?

      • Daniel Lemire

         “Are *currently* untestable theoretical projects pointless/useless?”

         They are not pointless. They are damaging. One professor pursues it, convinces graduate students to pursue it, they in turn become professors… they pursue it, and so on… meanwhile, they get funded, they get jobs… all those things are funded by other people… maybe the taxpayers… so it is money that could serve useful purposes, maybe fund real science… and instead it goes to waste. It can set back science by decades.

        Listen… I don’t care if on your own time you pursue string theory. This is fine. But if you go and get $1M from the government to pursue string theory, and the next guy does the same and so on… and so on, anyone who is not doing string theory can’t get funding… well, then it is a different story.

        “It should be self-evident that insight does not spring from the mind in the form of fully formulated quantitative theory.”

        Science is the idea that opinions and gut feelings are ultimately worthless. They are not worth discussing. So if it is all you have… then you are not doing science. Maybe what you are doing is fine… maybe you are having sex with a girl and it is beautiful… or maybe you are writing a great novel… congratulations… but these things are not science.

        By losing sight of what science is, then we risk losing the lessons that it teaches. One of them is that a group of people should be, on their authority, enforce a belief. Truth should be verifiable.

        Science is one of the things that allowed Europe, a failed backwater place, to nearly take over the world in about 200 years…

        It is very important… you can’t just have opinions about things… you need to build your ideas on a verifiable foundation… so that others don’t need to trust you, they can verify for themselves that you are correct.

        This is very important.

        How do you know that string theory is correct? You don’t. All you know is that countless physicists are backing it, maybe the majority of theoreticians. But that’s an authority-based truth.

        BTW climate science is falling into the same trap right now. We have authority-based knowledge about global warming.

        This is very damaging.

        • GregoryJRader

          Ok, lots of good stuff in this comment that might help us parse this out a bit further…

          On government funding:
          Here I absolutely agree with you.  In the context of certain sorts of funding it makes sense to apply the strictest standards for empirical science.  The government is the bluntest tool in the funding arsenal and – as you note – has the undesirable potential to artificially entrench an insufficiently validated scientific paradigm.  As with most public policy, the benefits of government involvement outweigh the costs only when applied to the surest of sure things.  In the domain of scientific funding that obviously demands a high degree of empirical validation (or potential for empirical testing in the short term).  

          However, we will want to distinguish this issue from questions about what activities are intrinsically worth pursuing outside the context of government involvement.

          “By losing sight of what science is, then we risk losing the lessons that it teaches. One of them is that a group of people should be, on their authority, enforce a belief. Truth should be verifiable.”

          I assume you intended to say “should not be able to” or similar.  Again, I agree.  The ultimate goal of any scientific endeavor, theoretical or empirical, is to arrive at verifiable truth.  Any time we grant authority to a given viewpoint beyond its degree of validation, we are setting ourselves up for unintended consequences.  

          To reiterate, I am not arguing that string theory should be accepted as validated theory.  It is what it is – essentially a theoretical framework still very much in the process of development.  Rather, I am considering whether the approach taken by string theorists is “reasonable” (read: potentially productive/generative), setting aside for a moment the ways in which outside forces (gov’t funding) pervert that process.  

          Obviously I think it is a reasonable approach and I think we need a better language with which to distinguish reasonable theoretical enterprise from the actual pseudoscience that obviously has no intention of ever arriving at verifiable truth.  (see next point) 

          “It is very important… you can’t just have opinions about things… you need to build your ideas on a verifiable foundation… so that others don’t need to trust you, they can verify for themselves that you are correct.”

          Here is where we diverge and I am genuinely curious whether we can reconcile our viewpoints.  In the highlighted section I think you are confusing process with principle.  In principle “science” is the human enterprise to discover mutually verifiable truth…you will get no argument from me there.  But that principle does not imply a necessary directionality to the scientific process.  I will instead offer the following proposal:

          In order to be considered complete, a scientific truth must consist of two components – a “what?” consisting of empirical fact, and a “why/how?” consisting of theoretical explanation.  Neither is inherently more important than the other; empirical observation without explanatory theory is just a collection of data.  Likewise, neither one need necessarily precede the other (any such notion would ultimately lead to nonsensical chicken/egg debates).

          Therefore, “science” may proceed either top-down or bottom-up.  The empirical approach represents the latter.  The theoretical approach represents the former.  In either case, the loop must be closed in order to be considered real science.  Empirical science must actually proceed upward towards explanatory theory if it is to be considered complete.  Theory must actually proceed downward towards testable prediction (and actual testing).  Either one is unscientific in isolation, i.e. data mining (in isolation) is just as unscientific as groundless theorizing (in isolation).

          Some individuals will naturally be more comfortable and capable with bottom-up cognition and therefore will gravitate in that direction; others likewise with top-down cognition.  Both perspectives are necessary and generative.  

          Each approach should be judged not on the basis of whether the loop is currently closed, but on whether the process is moving in the right direction.  String theory should not be evaluated on the basis of its current predictions (or lack thereof), but rather on whether it is progressing towards a stage that will allow prediction.  (I can see why some people might think such progress is not being made, though I doubt many of the critics are qualified to make such judgments.)  Similarly, I would not want to dismiss the value of overtly bottom-up approaches.  Such endeavors might have their place and are merely incomplete until explanatory theory is formulated.

          Hopefully you can find some common ground in the above.  I am working hard here to reconcile what I see as two highly divergent (though subtle) intrinsic perspectives ;)

  • alex ragus

    There’s a big difference between testable and provable.  People act on intuition all the time.  Those actions succeed or fail.  This is a test of the intuition, but doesn’t constitute proof of anything.  (Even after the fact, it could have been random.)  As a data point, that test is only useful to the individual in judging his own intuitive capabilities.

    Once you take a wider perspective on the mind-bogglingly complex field of psychology, you begin to notice patterns behind statements like these: 

    “Science is the idea that opinions and gut feelings are ultimately worthless.”

    Again, I’ll point you to some Ran Prieur; the top of this is a quick and dense read:

    A quote applies here and I forget the source and exact wording, but it went something like: “In science, any method is valid for discovery, but only the scientific method is valid for confirmation.”

    There’s also a huge contradiction in Daniel’s comment about “authority-based knowledge.”  No one is capable of testing all the theories of science themselves, so all scientists must take most experimental results on authority.  I abandoned laboratory physics in college once I began to imagine how many unforeseen variables might affect experimental results – either for or against universal confirmation.

    Not inclined to argue this time, so this will be my only comment ;)

    • GregoryJRader

      With regard to the Ran Prieur essay, I found the original version (which I read first) to be far too strident and naive for my taste.  The 2011 update was far more agreeable.

      I like the quote:

      “In science, any method is valid for discovery, but only the scientific method is valid for confirmation.” 

      I don’t want to dispute the importance of scientific confirmation.  There is obvious value in having a set of information that has been confirmed in certain consistent methodological ways.  But I see far too many people disputing the first part of that quote…presuming that discovery must necessarily be conducted through the same methods, and that any other discovery process is invalid.  

      With regard to your last point, you might be interested in “the decline effect”: 

  • NigelReading|ASYNSIS

    Hi Greg,
    Nice to connect – I’m part of the Asynsis-Constructal team (working directly with Professor Bejan and his colleagues to promote sustainable design and development in the built environment). His work is on the behaviours, mine on the geometries, if you like.
    I liked your thoughtful and well-referenced take on our work. I think you are just about spot-on with your analysis of where we are at with this new field of complexity science – which yes, requires more modelling, more intuition, more “art” than the usual sciences. This holism is required for seeing the big picture, rather than reductionism, which can follow as the field bifurcates and narrows to specific problems. It has taken us 15-20 years to get to this stage (our research was independently published in 1995-6), so please bear with us. And yes, I assure you, we are the real (spiral-shaped) deal.

    A new optimising, geometric design law of nature in the Asynsis principle-Constructal law, a new paradigm that will serve to promote in enviro-extremis, the sustainable design and development agenda in China and the world beyond.

    This dynamical symmetrical behaviour is a temporal signature of energy, mass and information flow, of nature self-organising itself more easily over time, for less energy cost. The reason we see the ratio in statics so often is because it is an artefact, a remnant of an archetypal, emergent dynamical process; revealing optimal geometries in natural systems. So one could argue that to best preserve nature – we had also best (& urgently), emulate her – by rapidly building these behaviours into our civilisation.
    Therefore the arguments for green building just got far more compelling as a result (because to be green is actually following a law of nature, the fundamental geometry of which is an icon of art, design and architecture). But will it make our jobs as designers easier (as Einstein remarked to Corb regarding his Modulor), or harder (since we are dealing with a process, not a state – a function, not a form)? 

    So beauty is super-positions of complexity, optimally, analogically, dynamically symmetrically & economically, elegantly realised. 
    Beauty and sustainability are one.
    Kind regards,
    Nigel Reading RIBA
    Design Director
    ASYNSIS Architecture + Design

    • GregoryJRader

      Pleased to hear that you found this a fair treatment.  In general I find that people struggle with any implication of top-down causation.  They either adopt creationist attitudes (i.e. some “intelligent” agent must be the cause) or, at the other extreme, they deny that top-down causation is even possible (i.e. reductionism).

      The commonality between both attitudes is the assumption that only agents can act as causes; for the reductionist the underlying atomic unit plays the role of causative agent.  But of course, all dynamical processes take place within some particular environment, and those environments impose constraints on the “agents” within.  It really should not be so surprising that consistent patterns emerge across diverse agent/environment interactions.  It also should not be so surprising that those patterns can only be explained by studying the nature of agent/environment negotiations, and cannot be explained by studying either in isolation. 

