Blinding Science update.

 

Rack of Violins

Last week’s article got quite a few clicks, and yesterday I received a comment from Claudia Fritz, who supervised the much-discussed research. I detected a slightly elevated fur level, so it seems like a thoughtful reply is in order.

Here is the comment from Claudia Fritz in its entirety:

Frank,

Thank you very much for your reactions, based not on what you have read in the press (often inaccurate, with few details about the experiment and with many extrapolations) but on the PNAS paper itself. After having read your first few paragraphs, I was impressed: you had read our paper carefully, and your report was accurate. But, unfortunately, not entirely. You seem to have forgotten to read the list of the authors. Otherwise, how could you have considered Joseph Curtin, one of the most renowned makers, as “a scientist who is unfamiliar with how professionals seriously assess string instruments”?

If I may, some of your criticisms are thus unfair: with Joseph on board, we did have “a luthier on hand”, and regarding your comment “I enjoyed the confident visual assessment of the strings”, I can tell you that the assessment was much more thorough than this. Joseph Curtin and Fan-Chia Tao (an engineer for the string company D’Addario (!) as well as a good player) are well qualified to judge whether the strings were good.

Now, may I ask you a few questions?

How can you write “It is true that many violinists try instruments in a ‘dead’ room as a first step” and then write that the experiment was not valid because it was run in a dead room? Remember, the question we asked was “Which violin would you like to take home?” (meaning so you can try it further), not “Which one would you like to buy?”, as we know that violinists can’t buy a violin without having played it for a while, and in different venues (in particular in a concert hall). Yes, maybe the results would be different in a large hall (we’re hoping to test this in the near future), and we totally agree with you on this, but that doesn’t change the results of our study, nor does it make it invalid. Our conclusions were clearly stated “under the ear, in a dead room”. We are not extrapolating (though, unfortunately, some people are) about what would happen in a large hall.

My second question is: why do you think that “the methodology of the study virtually guaranteed that the modern instruments would be favorably compared to the older”? The conditions were exactly the same for all violins! Isn’t it biased to say that a hotel room and a small amount of time are fine for new violins, but a concert hall and a lot of time are needed for old violins? And who is going to take a violin home and/or to a concert hall if he doesn’t like it after a relatively short trial in a maker’s shop (usually quite dry acoustically)? Nobody … at least if it’s a modern violin. Now tell them it’s a Strad … and they will all want to take it!

Looking forward to hearing from you. All comments are welcome as we design another study to address the limitations of this one. Research with humans works like this: you can never test everything in one experiment, for practical reasons. Humans get tired, so you usually can’t involve them in an experiment longer than two hours, and they are busy with their own lives, so it’s hard to have them come for many sessions. So you need to go step by step, and although this can be frustrating sometimes, it’s how it works!

Best wishes.

It doesn’t appear that she read my article with a great deal of precision, but I’ll attempt to focus on the main points above:

1. I did indeed look closely at the list of authors, as it would have been unusual and pretty miraculous to randomly guess that Joseph Curtin had participated. You refer to a sentence later in my piece regarding the overall impression when reading the study:

“Unsurprisingly, the paper itself feels like it was written by, well, a bunch of scientists who are unfamiliar with how professionals seriously assess string instruments.”

That’s not a judgment or criticism of the eminent Mr. Curtin in any way, although I have no idea what his credentials as a scientist are. It’s a personal opinion on how the paper reads on the whole. To reconsider, I did my own little experiment and evaluated the whole dissertation again while sipping a large glass of Maker’s Mark. Same conclusion, and the jargon and graphs still weren’t all that engaging to me. Regarding having a luthier around, again she seems to have missed the context of my statement in the article. I meant having a luthier participate, someone who might actually be able to make adjustments during a study of this type, not simply having one on the premises.

2. Regarding the assessment of the strings, the insight above is helpful (and not mentioned in the paper itself). But ultimately the opinions of the two gentlemen are irrelevant, because a) they weren’t participants in the study and b) as they both undoubtedly know, string choice is a highly individual, subjective preference that makes a huge difference when evaluating any instrument. Some people hate Dominant strings no matter how new. Some love D’Addario (mostly guitarists, in my experience). I’m partial to Vision Solo, but they don’t work well on every violin. The point is this: no changes or adjustments were allowed on the older instruments, and no one really knew how old the strings were. So under controlled conditions, for two people to say “hey, they look great” was kind of funny to me.

3. To the first question, it’s worth noting that I never wrote that the experiment was not valid. But this seems to point to the main criticisms I’ve repeatedly heard and read regarding the study, although I do not agree with all of them. To me, the problem is that the initial press accounts went something like this (obviously I’m paraphrasing):

“This rigorous scientific study proves that a bunch of professional violinists can’t tell the difference between these really expensive Strads, del Gesus, and modern instruments, so the modern ones must be just as good. It’s all a bunch of hype.”

Any press quotes I saw from the authors of the study pretty much agreed with that statement, or at least didn’t contradict it. But there’s a sentence in the paper itself that is really important:

“The study was designed not to test the objective qualities of the instruments, but the subjective preferences of the subjects under a specific set of conditions.”

This received almost no public attention at all, nor did many of the questions Ms. Fritz cites above. To me this acknowledges that a genuine industry-standard evaluation and comparison of all six instruments is not possible in a hotel room, with very little playing time and no possibility of adjustments on the older instruments. Unless you read the paper itself, you would be unaware that the real question of the study seems to be “what will be the initial, momentary impressions of some professional violinists under these very specific conditions?” That’s entirely different than what was widely reported, and given the chance to clarify, the (quoted) authors have not done so with any degree of detail.

4. Finally, the answer to the second question is pretty straightforward. The modern violins were presumably in very good adjustment and condition (since that was allowed), and superior examples of each maker’s work. And as I noted in my article, many newer instruments sound spectacular under the ear in a small room, but less so in a concert hall.

Because of the lending restrictions, it is impossible to know whether the older ones sounded anywhere near as great as they could, even for 10 minutes in a dead room right under your ear. Even simple and common adjustments that might take place in a violin shop were disallowed, such as changing strings or moving the bridge a bit. So a cursory opinion of each instrument is certainly possible, just not very reliable. As I also stated in the article, it doesn’t seem surprising that people’s perceptions were altered by the prior knowledge that some of the instruments were Strads. But this study seemed to go past that idea in a sort of jumbled way, complicated by the press accounts. Incidentally, I know several musicians who prefer superior modern instruments to lesser examples of old Italian masters’ work (Stradivari included). But it took them more than 20 minutes in a hotel room to reach that conclusion.

I think the study raises some interesting points and has certainly provoked a lot of discussion; I also think it’s incomplete and misleading in certain ways. I’d still invite anyone to read it for themselves and draw their own conclusions. I thank Ms. Fritz for taking the time to comment, and look forward to the next phase of her research.


7 thoughts on “Blinding Science update.”

  1. There are so many reasons the test was meaningless that it is difficult to know where to start. I love new instruments and own about 15 myself, but I never feel any need to make a comparison to anything else. I don’t even like comparing my new instruments to each other, because I would feel like I was betraying the makers. I do wonder why the comparisons are always to Strads and del Gesus. Why not start by comparing them to Testores and Gaglianos? Are the makers that confident that their instruments are superior to everything below Strads and del Gesus?

  2. Frank,
    Thanks for your answers. The discussion could go on forever, but there is no point arguing any further in writing on this blog. We should rather discuss this over a glass of Maker’s Mark (I’m very fond of whiskey, though I only know the Scottish and Irish ones – I have btw a very nice collection at home!). 😉

    Putting aside the limitations of this particular experiment, do you believe that you, personally, could distinguish between old and new violins in a double-blind situation – given a test-room you consider suitable for the task? And are you confident that the particular Strad you play, set-up just the way you like it, would be preferred over new instruments by listeners in a hall? Or two halls? Would you be interested in finding out? This is a serious offer, as we would welcome the chance to work with a violinist of your caliber and experience.

    Best wishes.

    • Hi Claudia,
      Sure, MM is always a good idea when discussing anything scientific. And I have no idea what my response would be under the test conditions you describe, although I confess I probably wouldn’t be that interested in the hotel room approach; I don’t think that really proves anything. But I’m fairly confident the violin I’m now playing would come out on top in a hall compared to most modern instruments – though you never know, I suppose. It depends who’s listening, of course (and who’s playing, and a host of other things). Regardless, I’m thankful for every day I can play it….

    • How would you design a test in which Frank Almond plays his familiar Strad and a modern instrument so listeners in a concert hall can form preferences? How would you control biases of both the listeners and Frank Almond? How would you eliminate all other variables other than the violins being compared? Each performance will be unique. Seems that will add a great deal of noise to the data. I don’t know but seems to me that Frank would likely be able to distinguish his familiar violin from a new one that he has not played even if you put dark goggles on him and perfume on the violins. Seems like a very very difficult test to design.

  3. When scientific research results do *prove* something, I think it’s always important to look precisely at what they really prove, in what way, and under what “reality context” of conditions. Thanks, Frank, for putting your finger on most of the open questions.
    For me, the study proves this: in the presence of a critically attentive scientific jury focused on an interesting question (one not directly related to actually interpreting a piece of music), in a hotel room, wearing some alien glasses, skilled violinists no longer recognize differences between instruments that most musicians and music listeners consider important.
    That is something, and it’s interesting – but only relatively so, because it lacks a true reality context (even though the study tries to claim one).
    If you cut off normal visual cues (“the participants wore modified welder’s goggles”), even just wearing unfamiliar goggles on your nose noticeably upsets proprioception. I would assume that proper proprioception is an important part of playing an instrument, and even more so of getting a solid feel for judging one (a task that probably already lies somewhat outside interpreting, communicating, and/or “simply” receiving the sounds as they are, and is difficult to do objectively).
    More subtle aspects of proprioception include feeling at home, familiar, and at ease around people and/or in a room. When I think about hotel rooms and their widely varying “atmosphere”, including the presence of sometimes very strong WiFi fields, the situation in a hotel room probably won’t be optimal in the majority of cases – actually far from it.
    I’d say the test setup introduced a major set of its own disturbances (subtle stress) that overrode the perception of the differences under observation.
    It may indeed prove that playing for a jury in an unfamiliar place, while perhaps necessary, might not be the best setting in which to test or decide on a new instrument.
    But who cares about this insight anyway? 🙂

