In this video, we're going to talk about expert evidence, or testimony, and some of the issues around it. While we'll cover some of the difficulties that jurors have with expert testimony, we'll also talk about some circumstances where we might expect jurors to struggle but where, in fact, they cope just fine.
Experts are called to give evidence on issues that are outside the scope of the average person's knowledge: topics beyond common sense, where the truth of the matter may be counterintuitive. Experts might provide evidence about the likelihood that a DNA sample would match a person by chance, the degree of match between two fingerprints or footprints, or even the match between two bullets. Experts might also provide evidence about financial matters or even a defendant's state of mind. As noted by Gemberling and Cramer in 2014, amongst others, experts can also play a more educational role: explaining the factors that might affect the reliability of eyewitness testimony, explaining when child witnesses can be relied upon, or providing context for the behaviour of a woman who kills her spouse after years of abuse. These are just some examples.
In some cases, experts provide their testimony as friends of the court, whereas in others they testify in an adversarial setting, where there might be one or more experts for each side of the case. While some critics argue that jurors don't have the ability to understand such scientific and technical evidence, Bornstein and Greene in 2011 point out that interviews with jurors suggest that jurors actually do have the capacity to think about such evidence carefully, and that jurors' deliberations reflect that diligence. Research also shows that jurors report thoroughly reviewing the evidence when reaching a verdict.
While Vidmar and Diamond in 2001 point out that such research suggests jurors are using systematic processing, it's worth noting that this research primarily relied on self-reports of jurors' experiences. There are some issues with self-report data, which is the type of data that comes from participants' own reflections on their behaviours and beliefs. We'll talk more about these issues in the video on jury directions. Suffice it to say, research that doesn't rely on self-report data indicates that there are some conditions under which jurors don't just rely on the content of the expert evidence.
So what are some of the challenges with expert testimony? When is a juror influenced by factors other than just the content of the expert's testimony? As we've said, when it comes to expert testimony, jurors are often required to evaluate highly complex technical and scientific material.
As you've heard in the video about how jurors make decisions, when information becomes complex and difficult to evaluate systematically, we tend to rely on shortcuts, which we call heuristics, to help us evaluate it. It is precisely because expert testimony can often be complex that researchers have been interested in whether jurors can make sense of the testimony, and whether there are any cues or shortcuts that they might rely on instead.
In this next part we're going to talk about how things about the expert, for example who they are, what their experience is, and whether they're being paid to testify, can influence how jurors perceive the expert's testimony. One of the first studies looking at this was conducted by Cooper and colleagues in 1996. They asked participants to watch a videotaped re-creation of a trial in which two scientists gave expert testimony about one aspect of the case. The plaintiff's expert was described as having either strong or weak credentials and gave his evidence using simple or complex language. As predicted, the extra-legal cue of expert credentials did influence the participants: they were more persuaded by the expert who had strong credentials, but only when the expert's testimony was complex. When the testimony was simple, participants were influenced by what the expert said rather than the expert's credentials. This suggests that participants were using the cue of the expert's credentials when they couldn't directly evaluate what the expert said because the language was too difficult to understand. An expert's gender can have a similar effect to credentials, in that it can work as a cue that influences how persuasive the expert is in the eyes of the perceivers, or the jurors. While some earlier research by Swenson and colleagues in 1994 suggested that female experts were more persuasive than male experts, subsequent research by Regina Schuller, me and a number of our colleagues suggests that when an expert's gender matches the domain they're testifying about, their testimony is evaluated more positively and mock jurors award higher damages.
For example, if a male expert testifies about a commercial price-fixing agreement in the context of a business engaged in rock crushing, he'll be more persuasive than a female expert giving exactly the same testimony, whereas a female expert would be more persuasive giving the same price-fixing testimony when the businesses were in perfume supply.
Our 2005 research also suggests, just as Cooper and colleagues found with expert credentials, that gender is used as a heuristic cue, as the effect is greatest when testimony is complex and difficult to understand. We also found in some subsequent research that the type of language an expert uses can be relied upon as a cue by perceivers. This is because we have expectations about how men and women typically use language: we expect men to use more complex language compared to women.
Our 2013 study suggested, consistent with this, that male experts were more persuasive than female experts when they used complex language and perceivers were under cognitive load. The expert's behaviour also matters: Cramer and colleagues found that moderate levels of confidence in delivering testimony led to an expert being perceived as most credible by mock jurors. It's not just an expert's credentials and gender that matter, either. Whether an expert is one who testifies in court frequently and is paid for his or her services also affects how jurors think about the expert's testimony. This is known as the hired gun effect and was originally studied by Cooper and Neuhaus in 2000. They conducted three experiments to look at whether jurors sometimes fall back on cognitive shortcuts when thinking about expert testimony, in the context of the hired gun effect. This is the idea that if an expert appears to be a gun for hire, in that they testify relatively frequently for a fee, their testimony might be less convincing because it looks like they're just in it for the money. This is essentially what the study found: participants who heard from a well-paid expert who frequently testified found that expert less trustworthy and were less persuaded by the testimony. The third study suggested that this was indeed the result of a cognitive shortcut: the effect was present when the testimony was presented in complex but not simple language. As we heard previously, shortcuts such as heuristics are most prominent when perceivers are not able to think systematically through the information. The complex language made it difficult for participants to systematically evaluate the expert's testimony, so they fell back on the shortcut: if the expert is doing it for the money, he or she is a hired gun and shouldn't be trusted so much.
So complex testimony seems to be problematic, but what about some of the most complex information that the lay juror might have to grapple with: statistics and probability? There has been some concern expressed by various courts that testimony including probabilities would unduly influence jurors. Kaye and Koehler in 1991 reviewed the research conducted up to that point and concluded, however, that jurors actually underweight statistical testimony when it's presented in the context of other evidence. Kaye and colleagues' research suggests that while jurors are susceptible to some fallacies when reasoning about probability, these tend to favour the defence rather than the prosecution. In their study, 480 people called for jury service were shown a 70-minute film of a mock trial that included DNA evidence suggesting that only one in 5072 Caucasian men had DNA types that matched the sample from the scene of the crime. Participants then deliberated in groups of eight. Rather than being overly persuaded by the probability, Kaye and colleagues found that the jurors generally coped with the evidence and, if anything, were conservative in estimating the likelihood that the sample came from the defendant.
How statistical evidence is presented matters, though, and it can make the evidence more or less persuasive. Koehler reported three experiments in 2001 that manipulated how statistical evidence was presented. He was able to show that when presented one way, the evidence convinced legal decision makers that the suspect was almost certainly the source of the DNA material, but when presented another way, it led a substantial proportion of people to conclude that the suspect could not have been the source of the DNA material. Koehler was testing his theory about how people make sense of probabilistic statistical evidence: specifically, that the cognitive availability of coincidental match exemplars determines what the evidence is taken to say about guilt. In other words, if we can easily think of examples of other people who might match the DNA sample by chance, then we think the defendant or suspect is less likely to be guilty. Koehler was able to manipulate how easy it was for people to think of such exemplars by changing the framing of the statistical information. The probability of a match was presented either as a percentage in relation to a single target, such as a 0.1% likelihood of a chance match with this specific defendant, or as an equivalent frequency in relation to multiple targets, such as one in 1000 people in a specific city.
The first presentation is designed to make it difficult to think of examples of other chance matches to the DNA sample. The probability of a chance match seems like a very small number, in fact close to zero, so the defendant must be the source of the DNA. With the second form of presentation, it's actually easy to reason that if the probability of a chance match is one in 1000, and if we assume there are 1,000,000 people in the city, then there are about 1000 people who would match the DNA by chance alone. Now this makes it sound pretty easy to find someone who would match the sample by chance, so it's quite possible that the defendant matches the sample by chance, not because he or she is the actual source of the DNA.
So despite the statistics in these examples being equivalent, the first was much more persuasive and led people to conclude that the defendant was the actual source of the DNA. Koehler found that this effect of presentation did, however, disappear as the probabilities became smaller and smaller. This is because as the reference group used to describe the probability approaches the size of the population being used for context, or becomes so large that we're basically referring to the entire population of the earth, it becomes harder to imagine chance matches. So, for example, say the probability is one in one million, and we present that probability either as a 0.0001% chance that the sample matched the defendant by chance, or as the sample matching one in 1,000,000 people in a city of 1,000,000 people by chance. There would be very little difference in the persuasiveness of each frame, as it seems very hard to imagine matches by chance alone under either form of presentation.
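If you want to check the arithmetic behind this framing effect yourself, here is a minimal sketch in Python. It is not from Koehler's materials; the city population of 1,000,000 and the function name are illustrative assumptions, and the figures are simply the ones used in the examples above.

```python
# A minimal sketch (illustrative only, not from Koehler's experiments) showing why
# the percentage and frequency framings are mathematically equivalent even though
# they feel very different. The population size of 1,000,000 is assumed here.

def expected_chance_matches(match_probability, population):
    """Expected number of people who would match the sample by chance alone."""
    return match_probability * population

city_population = 1_000_000

# Framing 1: "a 0.1% likelihood of a chance match" (single-target percentage)
p_percentage = 0.1 / 100

# Framing 2: "one in 1000 people would match" (multi-target frequency)
p_frequency = 1 / 1000

print(expected_chance_matches(p_percentage, city_population))  # about 1000
print(expected_chance_matches(p_frequency, city_population))   # about 1000

# Both framings imply roughly 1000 coincidental matches in the city, which is why
# the frequency wording makes chance matches so easy to imagine.

# With a much rarer profile (one in a million), either framing implies only about
# one coincidental match in the whole city, so the framing effect largely disappears.
p_rare = 1 / 1_000_000
print(expected_chance_matches(p_rare, city_population))        # about 1
```

The point of the sketch is simply that the expected number of coincidental matches is the same under either framing; what differs is how easily jurors can picture those matches.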
Next we're going to talk about how the media portrays expert testimony, in particular expert forensic testimony, and how that might influence how jurors perceive the testimony. We could easily have talked about these effects in the media exposure video, as they are another form of what we've called media exposure effects, this time specific to how experts are perceived. Often these types of effects are called the CSI effect. When we're talking about the CSI effect, though, we need to be clear about what we mean. There are three possible ways that this effect could be defined, according to Podlas in 2006. The first is that CSI increases the lay public's interest in forensic science; the second is that CSI creates unreasonable expectations about forensic evidence on the part of jurors, making a conviction less likely; and the third is that CSI raises the credibility of scientific evidence so that it is seen as almost infallible, making a conviction more likely.
As you can see, those last two effects are actually opposite in direction. Podlas in 2006 surveyed 306 university students, asking them how often they viewed CSI-related television shows. Participants were also asked to consider a hypothetical rape case and reach a decision about what the verdict should be. In this study, there was no difference in verdicts between those who watched CSI-related television shows and those who did not. Thus Podlas concluded that there was no effect of watching CSI-related shows in terms of the first two definitions mentioned before. In contrast, Schweitzer and Saks in 2007 used a very similar paradigm and found that watching CSI and crime programmes made participants more critical of forensic evidence in some circumstances, such as when that evidence was ambiguous. Those studies used university students, though. What about real jurors? What do they think about CSI-related evidence?
Shelton and colleagues in 2006 conducted a study of actual jurors. They did not find this type of effect, however: watching CSI-type shows didn't influence verdicts. Now, given that sometimes we find evidence of a CSI effect and sometimes we don't, this suggests there might be other variables involved in the relationship between watching CSI shows and outcomes such as conviction rates. One possibility is that CSI exposure has its effect through raising expectations about evidence, and it's these expectations that influence verdicts.
Kim and colleagues set out to test this in their 2009 study. They surveyed 1027 actual jurors in Michigan, in the USA. Participants were asked to read about three different case types and to say how willing they would be to convict a defendant when there was no scientific evidence and only (a) circumstantial evidence or (b) eyewitness testimony. Participants were also asked to report how often they watched CSI-style drama shows, along with their expectations about receiving scientific evidence of some kind. What they found was that watching CSI shows had an indirect effect on convictions, through raising expectations about scientific evidence, but only for the circumstantial evidence cases. So while jurors do seem able to make sense of some pretty complex and difficult-to-understand testimony, and don't seem to be too badly misled by media portrayals of forensic evidence or by statistical information, they do sometimes rely on shortcuts when the going gets tough. One potential way to alleviate that is for experts to use the simplest and clearest language possible.