A Primer on ASR and Machine Learning For Stenographers

There’s a lot of conjecture when it comes to automatic speech recognition (ASR) and its ability to replace the stenographic reporter or captioner. You may also see ASR mentioned alongside NLP, or natural language processing; the two are related but not identical, since ASR turns speech into text while NLP is about getting computers to work with language more generally. An important piece of the puzzle is understanding the basics behind artificial intelligence and how complex problems are solved. This can be confusing for reporters because the literature on the topic is full of words and concepts that we have, at best, a weak grasp on. I’m going to tackle some of that today. In brief, computer programmers are problem solvers. They utilize datasets and algorithms to solve problems.

What is an algorithm?

An algorithm is a set of instructions that tell a computer what to do. You can also think of it as computer code for this discussion. To keep things simple, computers must have things broken down logically for them. Think of it like a recipe. For example, let’s look at a very simple algorithm written in the Python 3 language:

Do not despair. I’m about to make this so easy for you.
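
Here’s a minimal sketch of such a program in Python 3 (the exact wording of the messages and the variable name are illustrative):

    print("The stenographer is _.")              # line one: put the prompt on the screen
    stenographer = input()                       # line two: store whatever you type in
    if stenographer in ("awesome", "Awesome"):   # lowercase or uppercase "a" both count
        print("You are right!")
    else:
        print("The correct answer was awesome.")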

Line one tells the computer to put the words “The stenographer is _.” on the screen. Line two creates something called a “stenographer,” and the stenographer is equal to whatever you type in. If you input the word awesome with a lowercase or uppercase “a,” the computer will tell you that you are right. If you input anything else, it will tell you the correct answer was awesome. Again, think of an algorithm like a recipe. The computer is told what to do with the information or ingredients it is given.

What is a dataset?

A dataset is a collection of information. In the context of machine learning, it is a collection that is put into the computer. An algorithm then tells the computer what to do with that information. Datasets will look very different depending on the problem a computer programmer is trying to solve. For enhancing facial recognition, for example, a dataset may consist of pictures: a wide range of photos labeled “face” or “not face.” The algorithm might tell the computer to compare millions of pictures. After doing that, the computer has a much better idea of what faces “look like.”
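
To make that concrete, a dataset in code can be as simple as a list of labeled examples. A tiny sketch, with made-up file names standing in for real photos:

    # A toy facial-recognition dataset: each entry pairs a picture with a label.
    # The file names are hypothetical placeholders.
    dataset = [
        ("photo_001.jpg", "face"),
        ("photo_002.jpg", "not face"),
        ("photo_003.jpg", "face"),
    ]
    for picture, label in dataset:
        print(picture, "is labeled", label)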

What is machine learning?

As demonstrated above, algorithms can be very simple steps that a computer goes through. Algorithms can also be incredibly complex math equations that help a computer analyze datasets and decide what to do with similar data in the future. One issue that comes up with any complex problem is that no dataset is perfect. With facial recognition, for example, studies have found almost 100 percent accuracy on lighter male faces but only about 80 percent accuracy on darker female faces. There are two major ways this can happen. One, the algorithm may not accurately instruct the computer on how to handle the differences between a “lighter male” face and a “darker female” face. Two, the dataset may not equally represent all faces. If the dataset has more “lighter male” faces in this example, then the computer will get more practice identifying those faces and will not be as good at identifying other faces, even if the algorithm is perfect.
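
Here’s a rough sketch of how that disparity shows up when a trained model is tested, with invented counts standing in for a real evaluation:

    # Hypothetical test results: the same model, scored separately by group.
    results = {
        "lighter male faces": {"correct": 998, "total": 1000},
        "darker female faces": {"correct": 800, "total": 1000},
    }
    for group, r in results.items():
        print(f"{group}: {r['correct'] / r['total']:.1%} accuracy")

The overall accuracy can look great if most of the test set comes from the overrepresented group, which is exactly why results need to be broken out this way.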

Artificial intelligence, AI, and voice recognition, for purposes of this discussion, are all synonymous with each other and with machine learning. The computer is not making decisions for itself like you see in the movies; it is being fed lots of data and using that data to make future decisions.

Why Voice Recognition Isn’t Perfect and May Never Be

Computers “hear” sound when a microphone converts the changing air pressure of a noise into electronic signals that can be stored, analyzed, or played back through a speaker. A dataset for audio recognition might look something like a clip of someone speaking paired with the words that are spoken. Many factors complicate this. Datasets might be focused on speakers who speak in a grammatically correct fashion. Datasets might focus on a specific demographic. Datasets might focus on a specific topic. Datasets might focus on audio that has no background noise. Creating a dataset that accurately reflects every type of speaker in every environment, and an algorithm that tells the computer what to do with it, is very hard. “Training” the computer on imperfect datasets can result in a word error rate of up to 75 percent.
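
Word error rate is the usual yardstick here: the smallest number of word substitutions, insertions, and deletions needed to turn the computer’s output into the true transcript, divided by the number of words actually spoken. A minimal sketch of that calculation:

    def word_error_rate(reference: str, hypothesis: str) -> float:
        # Standard edit-distance formulation of WER.
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words.
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution
        return dp[len(ref)][len(hyp)] / len(ref)

    # One wrong word out of four spoken: a 25 percent word error rate.
    print(word_error_rate("the stenographer is awesome",
                          "the stenographer was awesome"))  # 0.25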

This technology is not new. There is a patent from 2000 that seems to describe a design for feeding audio and stenographic transcription to a “data center.” That patent was assigned to Nuance Communications, the owner of Dragon, in 2009. From the documents, as I interpret them, it was thought that 20 to 30 hours of training could result in 92 percent accuracy. One thing is clear: as far back as 2000, 92 percent accuracy was in the realm of possibility. As recently as April 2020, the data studied from Apple, IBM, Google, Amazon, and Microsoft showed 65 to 80 percent accuracy. Even assuming, from Microsoft’s intention to purchase Nuance for $20 billion, that Nuance is the best voice recognition on the market today, there’s still zero reason to believe that Nuance’s technology is comparable to court reporter accuracy. Nuance Communications was founded in 1992. Verbit was founded in 2016. If the new kid on the block seriously believes it has a chance of competing, and it seems to, that’s a pretty good indicator that Nuance’s lead is tenuous, if it exists at all. There’s a long list of problems standing in the way of automating speech recognition, and even though computer programmers are brilliant people, there’s no guarantee any of those problems will be “perfectly solved.” Dragon trains to a person’s voice to get its high level of accuracy. It simply would not make economic sense to spend hours training the software on every person who will ever speak in court, and the process would be susceptible to sabotage or mistake if it were unmonitored and/or self-guided (AKA cheap).

This is all why legal reporting needs the human element. We are able to understand context and make decisions even when we have no prior experience with a situation. Think of all the times you’ve heard a qualified stenographer, videographer, or voice writer say “in 30 years, I’ve never seen that.” For us, it’s just something that happens, and we handle whatever the situation is. For a computer that has never been trained with the right dataset, it’s catastrophic. It’s easy, now, to see why even AI proponents like Tom Livne have said that they will not remove the human element.

Why Learning About Machine Learning Is Important For Court Reporters

Machine learning, or applications fueled by machine learning, are very likely to become part of our stenographic software. If you don’t believe me, just read this snippet about Advantage Software’s Eclipse AI Boost.

Don’t get out the pitchforks. Just consider what I have to blog.

If you’ve been following along, you’ve probably figured out, and the snippet pretty much lays it out, that datasets are needed to train “AI.” There are a few somewhat technical questions that stenographic reporters will probably want answered at some point:

  1. Is this technology really sending your audio up to the Cloud and Google?
  2. Is Google’s transcription reliable?
  3. How securely is the information being sent?
  4. Is the reporter’s transcription also being sent up to the Cloud and Google?

The reasons for answering?

  1. The sensitive nature of some of our work may make it unsuitable for being uploaded. To the extent stuff may be confidential, privileged, or ex parte, court reporters and their clients may simply not want the audio to go anywhere.
  2. Again, as shown in “Racial disparities in automated speech recognition” by Allison Koenecke et al., Google’s ASR word error rate can be as high as 30 percent. Having to fix 30 percent of a job is a frightening possibility that could make the technology more a hindrance than a help. I’m a pretty average reporter, and if I don’t do any defining on a job, I only have to fix 2 to 10 percent of any given job.
  3. If we assume that everyone is fine with the audio being sent to the cloud, we must still question the security of the information. I assume that the best encryption possible would be in use, so this would be a minor issue.
  4. The reporter’s transcription carries not only all the same confidential information discussed in point 1, but would also provide helpful data to make the AI better. Reporters will have to decide whether they want to help improve this technology for free. If the reporter’s transcription is not sent up with the audio, then the audio would ostensibly be useful only if human transcribers went through it, similar to what Facebook was caught doing two years ago. Do we want outside transcribers having access to this data?

Our technological competence changes how well we serve our clients. Nobody reading this needs to become a computer genius, but being generally aware of how these things work and some of the material out there can only benefit reporters. In one of my first posts about AI, I alluded to the fact that just because a problem is solvable does not mean it will be solved. I didn’t have any of the data I have today to assure me that my guess was correct. But I saw how tech news was demoralizing my fellow stenographers, and I called it as I saw it even though I risked looking like an idiot.

It’s my hope that reporters can similarly let go of fear and start to pick apart the truth about what’s being sold to them. Talk to each other about this stuff, pros and cons. My personal view, at this point, is that a lot of these salespeople saw a field with a large percentage of women sitting on a nice chunk of the “$30 billion” transcription industry, and assumed we’d all be too risk averse to speak out on it. Obviously, I’m not a woman, but it makes a lot of sense. Pick on the people that won’t fight back. Pick on the people that will freeze their rates for 20 or 30 years. Keep telling a lie and it will become the truth because people expect it to become the truth. Look how many reporters believe audio recording is cheaper even when that’s not necessarily true.

Here’s my assumption: a little bit of hope and we’ve won. Decades ago, a scientist named Richter did an experiment where rats were placed in water. It took them a few minutes to drown. Another group of rats was taken out of the water just before drowning. The next time they were submerged, they swam for hours to survive. We’re not rats, we’re reporters, but I’ve watched this work for humans too. Years ago, doctors estimated a family member would live about six more months. We all rallied around her and said “maybe they’re wrong.” She went another three years. We have a totally different situation here. We know they’re wrong. Every reporter has a choice: sit on the sideline and let other people decide what happens, or become advocates for the consumers we’ve been protecting for the last 140 years, since before the stenotype design we use today was even invented. People have been telling stenographers that their technology is outdated since before I was born, and it’s only gotten more advanced since that time. Next time somebody makes such a claim, it’s not unreasonable for you to question it, learn what you can, and let your clients know what kind of deal they’re getting with the “new tech.”

Addendum 4/27/21:

Some readers checked in about the Eclipse AI Boost, and as it was relayed to me, the agreement is that Google will not save the audio and will not be taking the stenographic transcriptions. Assuming that this is true, my current understanding of the tech is that stenographers would not be helping improve the technology by utilizing it, unless there’s some clever wordplay going on (“we’re not saving the audio, we’re just analyzing it”). At this point, I have no reason to suspect that kind of a game. In my view, our software manufacturers tend to be honest because there’s simply no truth worth getting caught in a lie over. The worst I have seen are companies using buzzwords to try to appease everyone, and I have not seen that from Advantage.

Admittedly, I did not reach out to Advantage myself because this was meant to help reporters understand the concepts rather than serve as a news story. But I’m very happy people took that to heart and started asking questions.

How We Discuss Errors and Automatic Speech Recognition

As a stenographic court reporter, I have been amazed by the strides in technology. Around 2016, I, like many of you, saw the first claims that speech recognition was as good as human ears. Automation seemed inevitable, and a few of my most beloved colleagues believed there was not a future for our amazing students. In 2019, the Testifying While Black study was published in the journal Language. While the study and its pilot studies showed that court reporters were twice as good at understanding the AAVE dialect as the average person, even though we have no training whatsoever in that dialect, the news media focused on the fact that we certify at 95 percent and yet only had 80 percent accuracy in the study. Some of the people involved with that study, namely Taylor Jones and Christopher Hall, introduced Culture Point, just one provider that could help make that 80 percent so much higher. In 2020, a study from Stanford showed that automatic speech recognition had a word error rate of 19 percent for “white” speakers, 35 percent for “black” speakers, and “worse” for speakers with a high dialect density. How much worse?

The .75 on the left of the chart means 75 percent. DDM is the dialect density measure. Even with fairly low dialect density, we’re looking at over a 50 percent word error rate.

That’s a 75 percent word error rate in a study done three or four years after the first claims that automatic speech recognition had reached 94 percent accuracy. But in all my research and all that has been written on this topic, I have not seen the following point addressed:

What Is An Error?

NCRA, many years ago, set out guidelines for what constitutes an error. Word error guidelines take up about a page. Grammatical error guidelines take up about a page. What this means is that when you sit down for a steno test, you’re not being graded on your word error rate (WER); you’re being graded on your total errors. We have decades of failed certification tests where a period or comma meant a reporter wasn’t ready for the working world yet. Even where speech recognition posts an amazing WER, I’ve almost never seen it produce appreciable grammar, punctuation, Q&A formatting, or anything else we do to make the transcript readable. It’s so bad that advocates for the deaf, like Meryl Evans, refer to automatic speech recognition as “autocraptions.”
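
To see why WER and total errors diverge, note that ASR accuracy is typically scored on text that has been normalized first, which strips out exactly the punctuation and capitalization we are graded on. A sketch, with a simplified normalizer standing in for what real scoring pipelines do:

    import re

    def normalize(text: str) -> str:
        # Strip punctuation and case before scoring, as ASR benchmarks commonly do.
        return re.sub(r"[^\w\s]", "", text).lower()

    verbatim = "Q. Did you leave? A. No, sir. I stayed."
    asr_output = "q did you leave a no sir i stayed"
    # Identical once normalized: a 0 percent word error rate,
    # even though every period, comma, and Q&A marker is gone.
    print(normalize(verbatim).split() == asr_output.split())  # True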

Unless the bench, bar, and captioning consumers want word soup to be the standard, the difference in how we describe errors needs to be injected into the discussion. Unless we want to go from a world where one reporter, perhaps paired with a scopist, completes the transcript and is accountable for it, to a world where up to eight transcribers are needed to transcribe a daily, we need to continue to push this as a consumer protection issue. Even where regulations are lacking, this is a serious and systemic issue that could shred access to justice. We have to hit every medium possible and let people know the record — in fact, every record in this country — could be in danger. The data coming out is clear. Anyone selling recording and/or automatic transcription says 90-something percent accuracy. Any time it’s actually studied? Maybe 80 percent accuracy, maybe 25; maybe they hire a real expert transcriber, or maybe they outsource all their transcription to Kenya or Manila. Perception matters; court administrators are making industry-changing decisions based on the lies or ignorance of private sector vendors.

The point is that recording equipment sellers are taking a field that stenographic court reporters have refined into a fairly painless process, with clear guidelines for what happens when something goes wrong, adding lots of extra parts to it, and calling it new. We’ve been comparing our 95 percent total accuracy to their “94 percent” word error rate. In 2016, perhaps there were questions that needed answering. It is April 2021; there’s no contest, and proponents of digital recording and automatic transcription have a moral obligation to look at the facts as they are today and not as they’d like them to be.

If you are a reporter that wants more information or ideas on how to talk about these issues with clients, check out the NCRA Strong Resource Library, and Protect Your Record Project. Even reporters that have never engaged in any kind of public speaking can pick up valuable tips on how to educate the public about why stenographic reporting is necessary. Lawyers, litigants, and everyday people do not have time to go seeking this information; together, we can bring it to them.

Aggressive Marketing — Growth or Flailing?

During our Court Reporting & Captioning Week 2021, there were a couple of press releases, and some press releases dressed up as journalism, all about digital recording, automatic speech recognition, and their accuracy and viability. There’s actually a lesson to be learned from businesses that continually promise without any regard for reality, so that’s what I’ll focus on today. I’ll start with this statement. We have a big, vibrant field of students and professionals where everyone actually involved in it, from the smallest one-woman reporting armies to the corporate giants, says technology will not replace the stenographic court reporter. Then we have the tech players who continuously talk about how their tech is 99 percent accurate, but can’t be bothered to sell it to us, and whose brilliant plan is to record and transcribe the testimony, something stenographers figured out how to do decades ago.

Steno students are out there getting a million views and worldwide audiences…
And Chris Day? He’s posting memes on the internet.

You know the formula. First we’ll compare this to an exaggerated event outside the industry, and then we’ll tie it right into our world. So let’s breeze briefly over Fyre Festival. To put it in very simple terms, Fyre Festival was an event where the CEO overpromised, underdelivered, and played “hide the ball” until the bitter end. Customers were lied to. Investors were lied to. Staff and construction members were lied to. It was a corporate fiasco propped up by disinformation, investor money, and cash flow games that ended with the CEO in prison and a whole lot of people owed a whole lot of money that they will, in all likelihood, never get paid. It was the story of a relative newcomer to the industry of music festivals saying they’d do it bigger and better. Sound familiar?

As for relative newcomers in the legal transcription or court reporting business, take your pick. Even ones that have been incorporated for a couple of decades really aren’t that impressive when you start holding up the magnifying glass. Take, for example, VIQ Solutions and its many subsidiaries:

I promise to explain if you promise to keep reading.

VIQ apparently trades OTC, so it gives us a rare glimpse of financial information that we don’t get with a lot of private companies. Right off the bat, we can see some interesting stuff: $8 million in revenue with a negative net income and a positive cash flow. Positive cash flow means the money they have on hand is going up. Negative net income means the company is losing money. How does a company lose money while its cash on hand continues to grow? Creditors and investors. When you see money coming in while the company is taking losses, it generally means the company is borrowing money or getting more cash from investors and shareholders. A company can continue on this way for as long as money keeps coming in. Companies can also use tricks similar to price dumping and charge one client or project an excessive amount in order to fund losses on other projects. The amazing thing is that most companies won’t light up the same way Fyre did; they’ll just declare bankruptcy and move on. There’s not going to be a big “gotcha” parade or reckoning where anyone admits that stenographic court reporting is by far the superior business model.
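
A simplified, hypothetical year shows how the two numbers can move in opposite directions (this treats net income as the operating cash flow, which glosses over real accounting details):

    # All figures invented for illustration.
    revenue = 8_000_000
    expenses = 10_000_000
    net_income = revenue - expenses                  # -2,000,000: the company lost money
    creditor_and_investor_money = 5_000_000          # new borrowing or share sales
    change_in_cash = net_income + creditor_and_investor_money
    print(f"Net income: {net_income:,}")             # Net income: -2,000,000
    print(f"Change in cash: {change_in_cash:,}")     # Change in cash: 3,000,000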

This is juxtaposed against a situation where, for the individual stenographic reporter, you’re kind of stuck making whatever you make. If things go badly, bankruptcy is an option, but there’s never really an option to borrow money or receive investor money for decades while you figure it out. Seeing all these ostensible giants enter the field can be a bit intimidating or confusing. But any time you see these staggering tech reveals wrapped up in a paid-for press release, I urge you to remember Fyre, remember VIQ, and remember that no matter what that revenue or cash flow looks like, you may not have access to the information that would tell you how the company is really doing.

This also leads to a very bright future for steno entrepreneurs. As we learn the game, we can pass it along to each other. When Stenovate landed its first big investor, I talked about that. Court reporting and its attached services, in the way we know them and love them, are an extremely stable, winning investment. Think about it. Many of us, when we begin down this road, spend up to $2,000 on a student machine and up to $9,000 on a professional machine and software. That $11,000 sinkhole, coupled with student loan debt, grows into stable, positive income. So what’s stopping any stenographic court reporting firm from getting out there and educating investors on our field? The time and drive to do it. Maybe for some people, they just haven’t had that idea yet. But that’s where we’re headed. I have little doubt that if we compete, we will win. But we have to get people in that mindset. So if you know somebody with that entrepreneurial spirit, maybe pass them this post and get them thinking about whether they’d like to seek investors to grow their firm and reach. Business 101 is that a dollar today is more valuable than a dollar tomorrow. That means our field can be extremely attractive to value investors and be a safe haven from the gambling money being supplied to “tech’s” habitual promisors.
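
That “Business 101” point has a standard formula behind it: a future dollar is discounted back to today’s value. A quick sketch, with the 5 percent discount rate being an assumed figure:

    # Present value of $1 received n years from now, at an assumed 5% rate.
    rate = 0.05
    for years in (1, 5, 10):
        present_value = 1 / (1 + rate) ** years
        print(f"$1 in {years} year(s) is worth ${present_value:.2f} today")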

Know a great reporting or captioning firm that needs a spotlight? Feel free to write me or comment about them below. I’ll start us off. Steno Captions, LLC launched recently without doing the investor dance. That’s the kind of promise this field has. I wish them a lot of luck and success in managing clients and training writers.

Turning Omissions Into Opportunity

We’re in an interesting time. Pretty much anywhere you look, there are job postings for digital reporters, articles with headlines talking about our replacement, and articles with headlines talking about our angst. Over time, brilliant articles from people like Eric Allen, Ana Fatima Costa, Angie Starbuck (bar version), and Stanley Sakai start to get buried or appear dated when, in actuality, not much has changed at all. They’re super relevant and on point. Unfortunately, at least for the time being, we’re going to have to use our professional sense, think critically, and keep spreading the truth about ourselves and the tech we use.

One way to do that critical thinking is to look squarely at what is presented and notice what goes unmentioned. For example, look back at my first link. Search for digital reporting work and ambiguous “freelance” postings come up, meaning stenographer jobs are actually being branded as “digital” jobs. District courts seeking a stenographer? Labeled as a digital job. News reporters to report news about court? Labeled as a digital job. No wonder there’s a shortage; we’re just labeling everything the same way and expecting people who haven’t spent four decades in this business to figure it out. In this particular instance, ZipRecruiter proudly told me there were about 20 digital court reporter jobs in New York, but in actuality about 90 percent were mislabeled.

Another way to do it is to look at contradictions in the general narrative. For example, we say steno is integrity. So there was an article from Lisa Dees that shot back and said, basically, that any method can have integrity. Can’t argue there. Integrity is kind of an individual thing. But to reach the conclusion that these methods are equal, you have to ignore a lot of stuff that anyone who’s been working in the field a while knows. Stenography has a longer history and a stronger culture. With AAERT pulling in maybe 20 percent of what NCRA does on the regular, who has more money going into ethics education? Most likely stenographers. When you multiply the number of people who have to work on a transcript, you’re multiplying the risk that one of those people lacks integrity. We’re also ignoring how digital proponents like US Legal have no problem going into a courtroom and arguing that they shouldn’t be regulated like court reporters because they don’t supply court reporting services. Even further down the road of integrity, we know from other digital proponents that stenography is the gold standard (thanks, Stenograph) and that the master plan for digital proponents is to use a workforce that is not highly trained. I will totally concede that these things all come from “different” sources, but those sources all point to each other as de facto experts in the field and sit on each other’s boards and panels. It’s very clear there’s mutual interest. So, again, look at the contradictions. “The integrity of every method is equal, but stenography is the gold standard, but we are going to use a workforce with less training.” What?

Let’s get to how to talk about this stuff, and for that, I’m going to leave an example here. I do follow the court reporting stuff that gets published by Legaltech News. There’s one news reporter, Victoria Hudgins, who has touched on steno and court reporting a few times. I feel her information is coming mostly from the digital proponents, so in an effort to provide more information, I wrote:

“Hi Ms. Hudgins. My name’s Christopher Day. I’m a stenographer in New York. I follow with great interest and admiration most of your articles related to court reporting in Legal Tech News [sic]. But I am writing today to let you know that many of the things being represented to you by these companies appear false or misleading. In the August 24 article about Stenograph’s logo, the picture of the Stenograph offices that you were given is, as best I can tell, a stock photo. In the September 11 article about court reporter angst, Mr. Livne says our field has not been digitized, but that’s simply not true. Court reporter equipment has been digital for decades. The stenotype picture you got from Mr. Rando is quite an old model, and most of us do not use those anymore. I’m happy to send you a picture of a newer model, or share evidence for any of my statements in this communication.

Our position is being misrepresented very much. We are not worried so much about the technology; we are more worried that people will believe the technology is ready for prime time and replace us with it without realizing that it is not. Mr. Livne kind of admitted this himself. In his Series A funding, he or Verbit stated that the tech was 99 percent accurate. In the Series B funding, he said Verbit would not get rid of the human element. These two statements don’t seem very compatible.

How come when these companies are selling their ASR, it’s “99 percent” or “ready to disrupt the market,” but when Stanford studied ASR it was, at best, 80 percent accurate?

Ultimately, if the ASR isn’t up to the task, these are transcription companies. They know that if they continue to use the buzzwords, you’ll continue to publish them, and that will draw them more investors.

I am happy to be a resource on stenographic court reporting technology, its efficiency, and at least a few of the things that have been done to address the shortage. Please feel free to reach out.”

To be very fair, because of the limitations of the website submission form, she didn’t get any of the links. But, you know, I think this stands as a decent example of how to address news people when they pick up stories about us. They just don’t know. They only know what they’re told or how things look. There will be some responsibility on our part to share our years of experience and knowledge if we want fair representation in media. It’s the Pygmalion effect at work. Expectations can impact reality. That’s why these narratives exist, and that is why a countering narrative is so important. Think about it. When digital first came, it was all about how it was allegedly cheaper. When that turned out not to be true, it became a call for stenographers to just see the writing on the wall and acknowledge there is a shortage and that there is nothing we can do about it. Now that’s turning out not to be true as well: we’re doing a lot about it, and all that’s left is to let those outside the industry know the truth.

Addendum:

A reader reminded me that Eric Allen’s article is now archived. The text may be found here. For context purposes, it came amid a series of articles by Steve Townsend, and it is an excellent example of what I’m talking about in terms of getting the truth out there.