Let’s just get to the point. A study is set to be published in the linguistic journal Language in June 2019, and Stenonymous covered it immediately. Succinctly, the study showed that court reporters in the Philadelphia area were fairly inaccurate when dealing with the dialect of African American English. We had some suspicions about potential inaccuracy in the way the news was reporting it and kept an eye out for information as it developed.
In early March, we came across news articles which identified one of the hard-working linguists on the study, Taylor Jones. Upon review of Mr. Jones’s blog (soon to be Dr. Jones, as far as we’re aware), we reached out, and he responded to everything we had to ask.
Though we haven’t yet gotten to see the study, between correspondence with Jones, review of his blog, and review of media coverage on the topic, we have some conclusions to present:
- The participants were court reporters actually working in court.
- It’s true that stenographic court reporters were used.
- The trials were not testing the reporters’ real-time accuracy, and participants were given as much time as they wanted to transcribe.
- Sentence-level accuracy was only 59.5 percent. Measured word for word, accuracy was as high as 82.9 percent. Obviously, our stenographer training measures word-for-word accuracy.
- Small “errors” were not counted as errors. For example, if a speaker said “when you tryna go to the store?” then “trying to” and “tryin’ to” would both be counted as correct, while “when he tries to go” would be an error. So the errors, as best I can tell, fall in line with what NCRA says constitutes an error.
- Misunderstandings come from a number of different sources, including common phonetic misunderstandings and dialect-motivated misunderstandings as discussed in William Labov’s Principles of Linguistic Change trilogy. While Jones himself said bias cannot be ruled out, there are a number of syntactical and accent-related issues that may honestly be a challenge for court reporters and the average judge, juror, or listener.
- There were over 2,200 observations in this study: 83 statements multiplied by 27 court reporters.
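For anyone who wants to check the observation count, a quick sketch using the figures reported above (83 statements, 27 court reporters):

```python
# Figures as reported in the study coverage: 83 recorded statements,
# each transcribed by all 27 court reporters.
statements = 83
reporters = 27

observations = statements * reporters
print(observations)  # 2241, i.e., "over 2,200 observations"
```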
Now for some interesting highlights from my exchange with Jones:
- African American English is not wrong, and it is not slang, Ebonics, or street talk. It is a dialect with its own grammar and structure.
- The people who conducted the study are not accusing court reporters of doing anything wrong. In fact, in my conversation with Jones, he was supportive of a human stenographer over AI or automatic transcription because we still carry far greater accuracy than those alternatives.
So here is where we are: we have a piece of evidence from the linguistic community that there is an area we can improve on. I had briefly been in touch with a Culture Point representative, who said they can work with organizations around the country on their transcription suite package and that the budget for the workshop varies depending on modality and class size.
We should all do our best to incorporate these ideas into our work and training. If you are a state or national association, don’t shy away from the opportunity to dive in and develop training surrounding different dialects, or even fund studies to seek out these deficiencies. If you are a working reporter, don’t be afraid to ask for a repetition. You are the guardian of an accurate and true record, and our work collectively can impact people’s lives and fortunes.
One short last note: I apologize to my readers and to Mr. Jones. I promised my readers I’d get this article and the email exchange out much sooner. I feel this is important and want to be a part of spreading the message that we can always do better. Though Mr. Jones’s initial response came on March 8, I was unable to get this draft out until April 2. For that, I am sorry.
May 23, 2019 update: This came up in the news again, and another person brought to my attention this draft of the study, made available before its publication in Language. That person noted that the reporters were asked to paraphrase what was said, and that we do not interpret. My understanding and memory from my email exchange with Jones is that participants were asked both to transcribe and to interpret, and that at least one participant transcribed incorrectly but interpreted perfectly.
June 6, 2019 update:
Philadelphia judges came together to discuss language access after the study. As of this article, the preferred solution seems to be more training for court personnel rather than interpreters for different English dialects.
September 13, 2019 update:
Another article popped up, ostensibly on this same study. With great respect to those writers, I believe the headline claiming that white court reporters don’t get black testimony is incorrect, as is the contention that this is slang or Ebonics. When I wrote Jones, he was very clear that AAE is not slang. It’s a dialect. It has rules. I do hope that people read the work for what it is and not what they want it to be. People mishear things. Judges and juries mishear things. This study brings to light that even we, the people who care most about every word said, can mishear things. That makes it very, very important to be situationally aware and ask for clarification when it is appropriate, like many of us do every day.
January 28, 2020 update:
It should be noted that Mr. Jones, presumably now Dr. Jones, is listed as a co-founder of Culture Point on LinkedIn.
Some time later, I had an interview with VICE about this study because I was identified as a stenographic reporter with a lot of knowledge on it. While, in my mind, the study showed us we must do better, ultimately it confirmed that we are people’s best chance at being understood in the courtroom. Pilot study 1 showed regular people were about 40 percent accurate; pilot study 2 showed lawyers were about 60 percent accurate; we were about 80 percent accurate. Clearly, we all want 100 percent, but when you read that we’re twice as good as the average person at taking down this dialect, it changes the spin. Later on, a Stanford study showed that automatic speech recognition had a 20 percent error rate in “white speech,” a 40 percent error rate in “black speech,” and did even worse with the African American English dialect. When I graded the AAE example on their site, I saw that if it had been a steno test, it would have scored 20 out of 100. It’s our skill and dedication that keeps us top quality in making the record and in broadcast captioning.
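For readers who like the numbers spelled out, here is a minimal sketch of the accuracy comparison, using the approximate figures quoted in this article:

```python
# Approximate accuracy figures quoted above:
laypeople_acc = 0.40   # pilot study 1: regular people
lawyers_acc = 0.60     # pilot study 2: lawyers
reporters_acc = 0.80   # stenographic court reporters

# How many times more accurate reporters were than laypeople:
ratio = reporters_acc / laypeople_acc
print(ratio)  # 2.0, i.e., "twice as good as your average person"
```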