Stenonymous on VICE News Tonight

About four months ago, I sat down with Alzo Slade and talked with VICE about the study that showed court reporters had only 80 percent accuracy when taking down the African American English (AAE) dialect. It aired 6/18/20. There’s a YouTube mirror. This study was a shocker for many because people compare our general accuracy of 95 percent to a number like 80 percent, and it worries them. It worried me at the time, and I continued to cover it on this blog as more information came out. I was at VICE HQ Brooklyn for two hours, but only a few seconds made it into the segment, so please be understanding when it comes to what “made the cut.”

I was identified as a stenographic reporter with a lot of knowledge about the study. We all have a choice to make when approached by the press or any individual. Stonewall or try to present the facts? I chose the latter this time. A few things I would love to see more widely talked about:

  • AAE is not spoken by all black people. It’s a specific English dialect. I learned it also has rules and structure. It’s not “slang.”
  • Despite most of us having no formal training in AAE, we get it right about twice as often as the average person and 1.5 times as often as the average lawyer, if you look at the pilot studies. There’s also no good alternative: AI does worse on all speakers and even worse than that on AAE. We’re talking as low as 20 percent accuracy.
  • In actual court cases we have some context. We don’t just take down random lines. This doesn’t prevent all errors, but it helps court reporters a lot.
  • We don’t interpret. People concerned with our interpretations don’t always realize that. Interpretation only matters insofar as we must correctly hear what was said. The interpretations of jurors and lawyers matter much more, which is why it’s so important for us to get the words right for them. We can educate people on this topic and help them understand big time.
  • This issue is not necessarily a racial or racist one. Mr. Slade himself read the AAE sentence on paper during the segment “She don’t stay, been talking about when he done got it.” His response was something like “what the hell is this?” Anybody can have trouble with a new dialect. I know I have heard some AAE statements and done very well, and heard other AAE statements and done poorly. I’m big on the opinion that exposure is the only way to get better.
  • Studies like this only highlight the need for stenographic court reporters that truly care about the record. If you meet a young person interested in courtroom equality, it might be worth having the “become a court reporter” talk. We care, and we want every single person that fills our shortage to care too.

One thing I learned from this media appearance is to always keep your cool. At one point during my two hours there I felt very defensive and even a little worried they’d edit the segment in a way that was not fair to me. I kept my cool and continued the interview. That fear turned out to be totally unfounded! I am sure if I had overreacted, that overreaction would’ve been the face of steno, and that’s not cool!

Each stenographer is like an ambassador for who we are and what we do. A big part of what I do is getting to the bottom of things and communicating the truth about them so that each of us can go forward and be knowledgeable when the people we work with, judges or lawyers, bring this stuff up. Many of them already know we’re the best there is. The rest are just waiting for you. Your actions and excellence change the future every day. I got my five seconds of fame. Go get yours!


Sometime after this article was published, the VICE story I linked was locked on their website; you must select your TV provider to gain access. Also, I later learned Alzo actually aced the quiz. The reason he had trouble with that sentence was that it was not grammatical AAE.

Written Knowledge Test Randomizer

ATTENTION WINDOWS USERS: Click and play version here. NO installation required. Download the zip, unzip it, and double click the .exe.

If you support projects like this, feel free to show it by buying a Sad Iron Stenographer Mug, donating, sharing this post, or suggesting questions to increase the variation in mock tests.

I’ve created a computer program that chooses preselected questions at random and creates a WKT-style test. It also creates an answer key. It uses the .txt format, so pretty much every computer since Windows 95 can run it. Note that for all of this stuff you should use a laptop or desktop; using a mobile phone will make these materials much harder to use. The program renumbers the questions every time, and it randomizes whether each question’s correct answer is A, B, C, or D.

Basically, take a practice test or two, see how well you do, and if you see things you don’t know, look them up. You’ll be doing yourself a huge favor for your next written-knowledge style test.

See my previous comments on studying for legal and medical terminology.

If you hate computers, you can get 26 randomized tests here in a .zip folder.

If you want to use the program for yourself but don’t know how it works, check out my video tutorial here.

If you don’t like video tutorials, try the following:

  1. Download and install Python 3. It probably won’t matter whether it’s 3.6 or 3.7.
  2. Go to the code for my computer program. Copy and paste it into a notepad file. If you are confused, the computer program is the text labeled 001 WKT Generator.
  3. Save the notepad file and close it. You can name it anything. I suggest you call it ChrisDayIsAnnoying.
  4. Change the .txt that you just saved to a .py. Read this if you do not know how to show file extensions or do not see .txt.
  5. Now you have a .py file. Take that .py file and stick it in a folder by itself. You don’t have to, but it’ll make your life easier.
  6. Double click the .py file, or right click it and run/open it. It’s going to come up with a black box, say some words, and then you’re going to press enter, and the box is going to go away.
  7. When the box goes away, in the folder with your .py file will be two files, Mock Test.txt and Answer Key.txt. You now have a random mock test and its answer key.
  8. Special note: if you intend to run the program again, rename the Mock Test and Answer Key files first. The program creates a new Mock Test.txt and Answer Key.txt every time, and it will overwrite any files with those exact names.
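For the curious, the steps above amount to running one small script. The real code is the 001 WKT Generator linked earlier; here is a minimal sketch of how a generator like this can work. The sample questions are placeholders I invented for illustration — only the output filenames, the renumbering, and the answer-letter shuffling come from the description above.

```python
import random

# Placeholder question pool (invented for this sketch; the real program
# uses preselected WKT-style questions).
# Format: (prompt, correct answer, [three wrong answers])
QUESTIONS = [
    ("What does 'voir dire' refer to?",
     "Preliminary examination of a juror or witness",
     ["A type of objection", "A closing argument", "A written motion"]),
    ("Who certifies a deposition transcript?",
     "The court reporter",
     ["The judge", "The bailiff", "The clerk"]),
]

def make_test(questions, test_path="Mock Test.txt", key_path="Answer Key.txt"):
    random.shuffle(questions)           # randomize question order
    test_lines, key_lines = [], []
    for num, (prompt, correct, wrong) in enumerate(questions, start=1):
        options = wrong + [correct]
        random.shuffle(options)         # randomize which letter is correct
        letter = "ABCD"[options.index(correct)]
        test_lines.append(f"{num}. {prompt}")
        for label, option in zip("ABCD", options):
            test_lines.append(f"   {label}. {option}")
        key_lines.append(f"{num}. {letter}")
    # Like the original program, this overwrites any existing files
    # with the same names.
    with open(test_path, "w") as f:
        f.write("\n".join(test_lines))
    with open(key_path, "w") as f:
        f.write("\n".join(key_lines))

make_test(list(QUESTIONS))
```

Running it drops a fresh Mock Test.txt and Answer Key.txt into the current folder, which is why renaming old output before re-running matters.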

Language Study and Service Revisited

Let’s just get to the point. There is a study to be published in the linguistics journal Language in June 2019. Stenonymous covered this immediately. Succinctly, the study showed that court reporters in the Philadelphia area were pretty inaccurate when dealing with the African American English dialect. We had some suspicions about inaccuracies in the way the news was reporting it, and we kept an eye out for information as it developed.

In early March, we came across news articles which identified one of the hard-working linguists on the study, Taylor Jones. Upon review of Mr. Jones’s blog (soon to be Dr. Jones, as far as we’re aware), we reached out, and he responded to everything we had to ask.

Though we haven’t yet gotten to see the study, between correspondence with Jones, review of his blog, and review of media coverage on the topic, we have some conclusions to present:

  • The court reporters in the study were actual reporters working in court.
  • It’s true that stenographic court reporters were used.
  • The trials were not testing the reporters’ real-time accuracy, and participants were given as much time as they wanted to transcribe.
  • Sentence-by-sentence, accuracy was only 59.5% correct. Measured word for word, accuracy was as high as 82.9%. Notably, our stenographer training measures word-for-word accuracy.
  • Small “errors” were not counted as errors, such as if a speaker said “when you tryna go to the store?” Trying to and tryin’ to would both be counted as correct. An error would be “when he tries to go.” So the errors, as best I can tell, would fall in line with what NCRA says constitutes an error.
  • Misunderstandings come from a number of different sources, including common phonetic misunderstandings and dialect-motivated misunderstandings as discussed in William Labov’s Principles of Linguistic Change trilogy. While Jones himself said bias cannot be ruled out, there are a number of syntactical and accent-related issues that may honestly be a challenge for court reporters and the average judge, juror, or listener.
  • There were over 2,200 observations in this study: 83 statements multiplied by 27 court reporters.
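To make the sentence-versus-word distinction above concrete, here is a toy illustration. The sentences and “transcriptions” below are invented, not data from the study; the point is only that the two metrics can diverge sharply on the same transcripts.

```python
# Three reference sentences and three attempted transcriptions (invented).
reference = [
    "she been working there",
    "he done got it",
    "when you tryna go to the store",
]
transcribed = [
    "she been working there",           # perfect sentence
    "he had got it",                    # one word wrong
    "when you try to go to the store",  # close, but not exact
]

def sentence_accuracy(ref, hyp):
    # A sentence counts as correct only if it matches exactly.
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

def word_accuracy(ref, hyp):
    # Naive position-by-position word comparison, for illustration only;
    # real scoring would use alignment / edit distance.
    correct = total = 0
    for r, h in zip(ref, hyp):
        r_words, h_words = r.split(), h.split()
        total += len(r_words)
        correct += sum(a == b for a, b in zip(r_words, h_words))
    return correct / total
```

On this toy data, sentence accuracy is 1 of 3 (about 33%) while word accuracy is 9 of 15 (60%) — the same direction as the study’s 59.5% sentence versus 82.9% word figures.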

Now for some interesting highlights from my exchange with Jones:

  1. African American English is not wrong. It is not slang, Ebonics, or street talk. It has grammar and structure.
  2. The people who conducted the study are not accusing court reporters of doing anything wrong. In fact, in my conversation with Jones, he was supportive of a human stenographer over AI or automatic transcription because we still carry far greater accuracy than those alternatives.

So here is where we are: We’ve got a piece of evidence from the linguistic community that there is an area we can improve on. I had briefly been in touch with a Culture Point representative who said they can work with organizations around the country on their transcription suite package, and that the budget for the workshop varies depending on modality and class size.

We should all do our best to incorporate these ideas into our work and training. If you are a state or national association, don’t shy away from the opportunity to dive in and develop training surrounding different dialects, or even fund studies to seek out these deficiencies. If you are a working reporter, don’t be afraid to ask for a repetition. You are the guardian of an accurate and true record, and our work collectively can impact people’s lives and fortunes.

One short last note: I apologize to my readers and to Mr. Jones. I had promised my readers I’d get this article and the email exchange out much sooner. I feel this is important and want to be a part of spreading the message that we can always do better. Though Mr. Jones’s initial response came March 8, I was unable to get this draft out until April 2. For that, I am sorry.

May 23, 2019 update: This came up in the news again and another person brought to my attention this draft of the study made available before its publication in the Language journal. It was noted by that person that the reporters were asked to paraphrase what was said, and that we do not interpret. My understanding and memory from my email with Jones is that they were asked to transcribe and interpret, and that at least one participant transcribed incorrectly but interpreted perfectly.

June 6, 2019 update:

Philadelphia judges came together to discuss language access after the study. As of this article, it seems the solution would be more training for court personnel rather than interpreters for different English dialects.

September 13, 2019 update:

Another article popped up, ostensibly on this same study. With great respect to those article writers, I believe the headline that white court reporters don’t get black testimony is incorrect. I also believe that the contention that this is slang or Ebonics is incorrect. When I wrote Jones he was very clear that AAE is not slang. It’s a dialect. It has rules. I do hope that people really read the work for what it is and not what they want it to be. People mishear things. Judges and juries mishear things. This study brings to light that even we, the people who care most about every word said, can mishear things, and that makes it very, very important to be situationally aware and ask for clarification when it is appropriate, like many of us do every day.

January 28, 2020 update:

It should be noted that Mr. Jones, presumably now Dr. Jones, is listed as a co-founder of Culture Point on LinkedIn.


After some time, I had an interview with VICE about this study because I was identified as a stenographic reporter with a lot of knowledge on it. I will say that while, in my mind, it showed us we must do better, ultimately it confirmed that we are people’s best chance at being understood in the courtroom. Pilot study 1 showed regular people were about 40 percent accurate. Pilot study 2 showed lawyers were about 60 percent accurate. We were about 80 percent accurate. Clearly, we all want 100 percent, but when you read that we’re twice as good as the average person at taking down this dialect, it changes the spin. Later on, a Stanford study showed that automatic speech recognition had a 20 percent error rate on “white speech,” a 40 percent error rate on “black speech,” and did worse still with the African American English dialect. When I graded the AAE example on their site, I saw that if it had been a steno test, it would be 20/100! It’s our skill and dedication that keeps us top quality in making the record and in broadcast captioning.

Language Study and Service

Some may have read one of several articles from various outlets such as the New York Times, Philly, or Trib that, in summary, basically stated there’s a study that will be published in the Language journal. This study took a couple dozen volunteers and had them transcribe recorded statements that the news articles described as black dialect, but which I have also heard referred to as urban English or street talk. The volunteers did not do well; there was a high percentage of inaccuracy.

NCRA and PCRA responded to these articles openly. They basically called the title of the article(s) provocative and pointed out that this involved volunteers taking down recorded statements as opposed to a live courtroom setting. They seem to believe, as I do, that this study shows the desperate need for highly trained stenographers in courts.

Succinctly, there are two things the news often gets wrong: law and science. We are talking about both at the same time, so I’m willing to go out on a limb and say there are probably some inaccuracies or misconceptions about the actual study. Anecdotally, I look back to correspondence I’ve had with a researcher on the concept of benevolent sexism. I wrote Jin Goh, a researcher in that study, and Jin Goh basically said more studies are needed for the findings to be conclusive, and that the media misrepresented the study. The results were interesting and honest, but they did not mean that being nice to people is sexist, as some news outlets reported.

So of course the first step in understanding a study would be to read it and decide if reaching out to its authors is necessary. So I reached out to the group that publishes the Language journal only to learn that the study itself isn’t going to be published until June 2019.

So what can we say at this juncture? The study had a fairly small sample size, as best I can tell, of 27 volunteers. I am not entirely sure if those volunteers were stenographic reporters as of writing. This isn’t uncommon. We are in a world with finite resources and funding, so studies often don’t have the kind of money you’d need to reach conclusive results on any one topic over the course of one study. We can also note that this is, as best as I can tell, one of the first studies of its kind, so while it has interesting results, we need to remind ourselves these results are not, to our knowledge, representative of multiple studies over years. We need to remind ourselves that some researchers showed us months ago that studies and the news stories about them can be worded in a way that gets clicks, but not in a way that informs the reader. For example, a study was recently conducted that showed jumping out of a plane without a parachute does not increase your chance of injury. The catch? They jumped out of a parked plane. They did this intentionally to remind us all that news on studies can be slanted or cause misconceptions.

So what should we do? In my opinion, there’s only one thing to do. Open our eyes to the fact that a scientific study was conducted, and it’s apparent that humans mishear things a lot! Continue to adapt to different accents in our training and work. Continue to push to provide the best service possible to all lawyers, litigants, and caption consumers. I do think there’s a lot to be said for our performance in real courtroom settings. We ask for repeats all the time to make sure that what people say is honestly and accurately reflected, and that’s something they probably couldn’t get into a lab or study as easily. Perhaps with time we could even conduct our own study, and maybe it would find that the stenographer mishears less than the average person. Perhaps we’d find our hearing is average. We don’t know. That’s the point of studies.

Bottom line: Don’t let this thing ruffle your feathers. I saw a lot of reporters spew a lot of vitriol over the articles, and in the end, the theme of the articles was not “stenographers are bad,” but more “humans mishear things, and we should be mindful of this in the administration of our courts, because if the transcribers aren’t hearing it, it is likely the lawyers and judges aren’t either.” We’re good at what we do, and we’re better off proving that than attacking linguists on Twitter. We are better off making sure our service is the best service lawyers and litigants can find, period. Truthfully, researchers give us valuable insight into what we do, but it is we who perform every day who know what’s at stake for the lawyers, litigants, and judges we serve.

As an aside, I understand the verbiage of the headlines upset some readers and I agree that this all could’ve been written more artfully. I myself have used descriptors to try to explain the issue as it is and make it more clear for anyone that cares to read.


I am very excited to say another article was released which published the linguist’s name, Taylor Jones. Taylor Jones’s site has a lot of very specific examples that I think are eye-opening and important for everyone to read and understand, including examples like “when you tryna go to the store?” I am delighted to have come across Jones’s website and work, and I will be reaching out for comment and clarification on this study to understand exactly what it is about and how reporters might improve training. Previously, I believed I’d have to wait until June to see the study. At a glance, according to a January 2019 blog post by Jones, it does appear that they utilized and/or surveyed Philadelphia court reporters who were actually working in the courts. It is stated that, evaluated sentence by sentence, accuracy was just under 60%, and evaluated word by word, accuracy was about 82%. Without having yet received comment from Jones, I can say I am incredibly impressed by the blog, and anyone with interest in this study and developing better verbatim records should definitely swing by and read some of the stuff there. At first glance, this really may be more of an issue than I had believed, and I’d encourage every reader to keep an open mind. Notably, Jones states he has worked with Culture Point to come up with a training suite to address this issue.

April 2, 2019:

In order to be subscriber-friendly, I have attached all future updates on this topic to a new blog post.