Yes. I think part of the problem is this idea that you can evaluate a paper by some rigid checklist and that all papers follow some type of standard format and that all studies deal with the same problems. Nothing of the sort. Hell, you have to trust that the study authors actually reported everything accurately and you can't even do that.
But let me give an example of the kind of background thinking that you have to undertake before you can even begin to evaluate a study. Let's say there is a study on caffeine and exercise performance. I use this example because there are many such studies.
Ok, so you see this study and you want to read it and see if you think the conclusions match the data. In the study they use coffee as the caffeine delivery system. Without the fancy terms, that means the study participants drink coffee before working out and then they are monitored.
For a control group they use decaffeinated coffee. It's double blinded, meaning that neither the participants nor the researchers handing out the coffee know who's getting which coffee.
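To make the blinding idea concrete, here's a toy sketch in Python (all names and numbers are made up for illustration, not taken from any actual study): a third party generates coded cup labels and keeps the code-to-contents key, so neither the participants nor the researchers handing out the cups know which is which.

```python
import random

def assign_double_blind(participants, seed=0):
    """Hypothetical double-blind assignment: researchers only ever see
    cup codes; the key mapping codes to contents stays with a third party
    until the analysis stage."""
    rng = random.Random(seed)
    contents = ["caffeinated", "decaf"] * (len(participants) // 2)
    rng.shuffle(contents)
    key = {}      # held by the third party only, opened at analysis time
    handout = {}  # what the researchers see: participant -> cup code
    for person, drink in zip(participants, contents):
        code = f"cup-{rng.randrange(10**6):06d}"
        key[code] = drink
        handout[person] = code
    return handout, key

handout, key = assign_double_blind(["p1", "p2", "p3", "p4"])
# Researchers hand out cups by code; nobody in the room can decode them.
```

The point is simply that blinding is a mechanical property of who can see what, which matters for the mess that comes next.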
So far so good.
Now let's say they realize that the coffee has a bunch of other chemicals in it and that any number of these could interact with or interfere with the actions of the caffeine. So they figure that in the interest of thoroughness they will have another group that is given a caffeine pill containing the same amount of caffeine as the coffee. Okay? But wait a minute: they need to deal with the placebo effect again, so they also give another group a sugar pill.
Now, at the end of this bastardized study they collect all their data and conclude that caffeine pills with the equivalent amount of caffeine as a cup of coffee result in better exercise endurance than coffee.
Now, they used double blinding and control groups. The attrition rate was good. Do you accept these conclusions?
No. The study is a piece of shit. In this hodgepodge study so many variables were introduced it is ridiculous. You can't mix and match pills and liquids. Right off the bat, someone given a pill is going to expect a greater effect. You have also destroyed the researchers' blinding, because while they may not know who was given which coffee, they do know who was given a pill versus coffee. This means they can easily impart certain expectations to the study participants, even just with subtle attitudes and body language. Let's be clear: this does not mean that caffeine pills do not enhance endurance better than coffee. It simply means that the study is not good evidence of this.
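To see how badly this breaks, here's a toy simulation in Python (the effect sizes and baseline are completely made up, purely to illustrate the confound): even when caffeine's true effect is identical in both groups, a bigger expectation boost in the pill group makes pills "win" over coffee.

```python
import random

def simulate_endurance(n, caffeine_effect, expectation_boost, rng):
    """Hypothetical endurance scores (minutes to exhaustion): a fixed
    baseline plus the real caffeine effect, plus whatever boost comes
    from expectation, plus random noise."""
    baseline = 60.0
    return [baseline + caffeine_effect + expectation_boost + rng.gauss(0, 5)
            for _ in range(n)]

rng = random.Random(42)
true_caffeine_effect = 4.0  # identical real effect in both groups

# Made-up assumption: pills carry a bigger expectation boost than coffee.
coffee = simulate_endurance(400, true_caffeine_effect, 1.0, rng)
pill = simulate_endurance(400, true_caffeine_effect, 5.0, rng)

mean = lambda xs: sum(xs) / len(xs)
# The pill group comes out ahead purely because of expectation, not caffeine.
print(f"coffee mean: {mean(coffee):.1f}  pill mean: {mean(pill):.1f}")
```

The data would look perfectly clean, and the conclusion would still be garbage, because the design confounds delivery form with the thing being measured.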
Okay, so that study I described is a stupid example, right? Nobody would design a study so badly. Think again. MOST studies are that bad. Bad studies are the rule, not the exception! There are all sorts of researchers publishing studies in all sorts of "journals" who do not have a clue how to design and implement a proper study, no more of a clue than I do, which is no clue at all. YET, I see people who apparently think they understand how to evaluate studies discussing the conclusions of these types of studies all the time. We do not need to discuss the conclusions of the study! It's an invalid study. Throw it out! Ignore it.
So you want to know about reading studies?
Rule number one: Most studies suck and aren't worth the paper they are written on.
For another example of the kind of background knowledge that goes into this, I discuss some problems with a particular study on weight training injuries in Recreational Weight Training Makes You More Prone To Shoulder Injury?.