The Astonishing Reasons Why Your LLM is a Bad Notetaker

We’ve all been there. You gather your team for a meeting, you make a bunch of decisions that lead to a series of follow-up action items. Then a week goes by, you meet again and nobody remembers what on earth you decided and none of the action items have been closed out. And right there you just wasted hours of your time, the team’s time, and most importantly the company’s time. That, my friends, is why we take meeting notes!

Capturing meeting notes, and more importantly the action items that result from them, is critical for a high-functioning team. But there is a downside. Taking notes while simultaneously participating in a meeting is difficult, and you usually wind up focusing on one task or the other. So the prospect of a large language model (LLM) taking a meeting transcript and producing an accurate list of action items is insanely attractive. Too bad you’ll find it doesn’t do a particularly good job.

What is an Action Item

Before we dive into the details of why LLMs struggle to capture action items, it’s worth defining what an action item is. A quick search on Google will find you a dozen or so similar definitions. For the sake of this post, I prefer the definition given by our friends at Asana:

An action item is a task that is created from a meeting with other stakeholders to move a project towards its goal

https://asana.com/resources/action-items

That’s a pretty good definition. But truthfully if you interview 100 people and ask them what the action items are for a given meeting you’ll get 100 different answers. Why? It turns out that the complete list of action items from any given meeting is wildly dependent on the purpose of the meeting, the nature of a given project, the type of work done by an organization, and sometimes the simple subjectivity of the notetaker.

Anyone who has worked on machine learning projects knows you can’t teach a machine to learn random human behavior. There has to be structure in what you are trying to teach the machine, even if we humans can’t fully articulate it. So at Xembly, we’ve adopted a slightly more precise definition that draws clear lines between what is and is not an action item.

A commitment, made by a meeting attendee or on behalf of another person, to do future work related to the agenda of the meeting or the business of the organization.

Why the different definition? Action items from a 1:1 meeting may not be project-based. They may cover multiple projects or self-improvement tasks. A commitment to walk the dog after the meeting may be irrelevant for an engineering standup but critically important for a veterinary office. The definition above gives us the best chance of getting 100 people to agree on the action items from a given meeting.

Why LLMs Struggle with Action Item Detection

There are a host of reasons why LLMs fail to accurately capture the action items from a meeting with sufficient precision and recall. However, they fall into a few key areas:

  • Information isn’t encoded in the text
  • Lack of social awareness
  • Difficulty in doing abstract calculations

Let’s dive into each of these individually.

Information isn’t encoded in the text

I’ve discussed this issue in an earlier blog post, but to reiterate: an LLM is just predicting the next word or token based on the previous words or tokens. If the information necessary to predict the next word isn’t contained in the earlier text (or encoded in the base model), the LLM will likely not give you a high-value output. There is a variety of information that may not be contained in the text of a given conversation, but let’s focus on two in particular: outside context and visual information.

Outside Context

Let’s assume a manager passes by an employee earlier in the day to discuss a possible new project. Subsequently, in a later 1:1 meeting, the manager says “Remember that project we discussed, I want you to do it”. This is a situation where the context of the project is not contained in the text, so there is no way for the LLM to know that this is a precise action item. The LLM will either struggle to classify this as an action item or, at best, return a vague and ambiguous action item that isn’t of much use.

But missing context isn’t limited to nonspecific references. A lack of a corporate “world model” can have all sorts of implications. For example, “walking the dog” may be an action item if you work at a vet or just a passing comment in a standup meeting. Sarcasm may also be difficult to discern without a larger working knowledge of what is going on in an organization.

Visual Information

It is very common to have working meetings. In those meetings, some proportion of the tasks will be acted upon during the meeting itself, while others are commitments to future work. That distinction isn’t always obvious unless you have access to the associated visual information. For example, someone saying “I’m going to update that row in the spreadsheet” may or may not be doing so right at that moment. The text alone is often insufficient to tell that a meeting participant has already taken action on a task; you frequently need additional visual information to confirm whether a given item is a future commitment to do work.

Social Awareness

We humans are funny creatures. For a host of reasons I won’t get into here, we often like to be non-committal. That means we will often hedge our commitments so we can’t be held accountable. That ultimately has meaningful impacts on any model identifying action items. There are two techniques humans tend to use to avoid accountability that LLMs struggle with: the royal we and vague or ambiguous tasks.

The Royal We

The best way of avoiding ownership of a given task is to suggest that “we” should do the task. Because “we” means everybody or anyone, and therein lies the problem. Sometimes “we” really is the royal we. If I say “We all need to complete our expense reports by the end of the week”, that likely means everyone in the meeting owns the task. However, if I say “We will get back to you next week”, “we” means “I”, but I hedged on ownership. This makes it incredibly difficult for an LLM to understand whether these types of tasks are actually action items and, if they are, who the owner should be.

Vague and Ambiguous Tasks

The other way humans hedge on accountability is to provide vague or ambiguous task descriptions. For example “I’m going to do something about that” or “I should really look into that”. The problem with the first example is that the task is very unclear. The problem with the second example is that I used the hedge word “should”. In both these cases, it is unclear from the text if they are relevant action items. That means LLMs generally have to guess and usually do so with 50/50 accuracy at best.
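
To make this concrete, below is a toy sketch (illustrative only, not anything we actually ship) of the kind of keyword heuristic you might start with for flagging hedged statements. The pattern list and scoring are invented for this example; a real system would rely on a trained classifier rather than a lookup table.

import re

# Toy hedge detector: the patterns and the scoring are invented for
# illustration; a production system would use a trained classifier.
HEDGE_PATTERNS = [
    r"\bshould\b", r"\bmight\b", r"\bmaybe\b", r"\bprobably\b",
    r"\btry to\b", r"\blook into\b", r"\bdo something about\b",
]

def hedge_score(utterance: str) -> float:
    """Return a rough 0..1 score for how hedged a statement sounds."""
    hits = sum(bool(re.search(p, utterance.lower())) for p in HEDGE_PATTERNS)
    return min(1.0, hits / 2)

for text in ["I'll send the report by Friday.",
             "I should really look into that.",
             "I'm going to do something about that."]:
    print(f"{hedge_score(text):.1f}  {text}")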

Abstract Calculations

Last but not least, LLMs do poorly at abstract calculations. Action items often have a due date, but those dates are usually expressed relative to the conversation (e.g. “I’ll send you that on Tuesday”). Converting a relative date like “next Tuesday” to April 2nd, 2024 requires abstract calculation, and this is not something LLMs excel at. As I’ve commented in the past, LLMs struggle to even understand a leap year, so how can they accurately provide due dates?
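
For contrast, resolving a relative date is trivial for conventional code once you know the reference date, which is one reason this kind of arithmetic is better handled outside the language model. Here is a minimal sketch using only the Python standard library; the meeting date below is just an example.

from datetime import date, timedelta

def next_weekday(reference: date, weekday: int) -> date:
    """Resolve a phrase like "next Tuesday" against a reference date.
    weekday follows Python's convention: Monday=0 ... Sunday=6."""
    days_ahead = (weekday - reference.weekday()) % 7
    if days_ahead == 0:  # "next Tuesday" said on a Tuesday means a week out
        days_ahead = 7
    return reference + timedelta(days=days_ahead)

# If the commitment was made on Thursday, March 28th, 2024 ...
meeting_day = date(2024, 3, 28)
print(next_weekday(meeting_day, weekday=1))  # 2024-04-02, i.e. April 2nd, 2024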

Summarizing the Action Items from this Post

Well, if an LLM isn’t good enough on its own to capture action items for meeting notes, what should you do? At Xembly, we’ve found that you need to augment an LLM with additional models to truly get close to 100% precision and recall when identifying action items.

Specifically, we’ve found it necessary to be more permissive in what we call action items and subsequently use ranking models for ordering them by likelihood. This gives the end user the ability to quickly make those 50/50 judgment calls with just a click of a button. We have also built dedicated models for due date and owner detection that perform far more accurately than what you will get out of the box with an LLM. Finally, wherever possible we’ve tried to connect our evaluation to data sources (knowledge graphs/world models) that extend beyond the conversation.
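
As a rough illustration of that permissive-detection-plus-ranking pattern, the sketch below scores every candidate and sorts by likelihood so a person can confirm the borderline cases. It is a simplified stand-in, not our actual models; the scores and threshold are placeholders.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    text: str
    owner: Optional[str]
    score: float  # likelihood from a ranking model (placeholder values below)

def rank_candidates(candidates: list[Candidate], threshold: float = 0.2) -> list[Candidate]:
    """Keep anything above a permissive threshold and sort by likelihood so a
    human can accept or reject borderline items with a single click."""
    kept = [c for c in candidates if c.score >= threshold]
    return sorted(kept, key=lambda c: c.score, reverse=True)

# Placeholder candidates standing in for the output of upstream detection models.
candidates = [
    Candidate("Send the roadmap deck to finance", "Priya", 0.92),
    Candidate("We should really look into that", None, 0.35),
    Candidate("Walk the dog after the meeting", None, 0.10),
]
for c in rank_candidates(candidates):
    print(f"{c.score:.2f}  {c.text}  (owner: {c.owner or 'unassigned'})")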

Ultimately, LLMs can be an incredibly helpful tool for building a notetaking solution. But you’ll have a few action items on your plate to augment the technology if you want sufficient accuracy to delight your users.

Introducing Task-Oriented Multiparty Conversational AI: Inviting AI to the Party

The term “conversational AI” has been around for some time. There are dozens of definitions all over the internet. But let me refresh your memory with a definition from NVIDIA’s website.

Conversational AI is the application of machine learning to develop language-based apps that allow humans to interact naturally with devices, machines, and computers using speech

https://www.nvidia.com/en-us/glossary/conversational-ai

There’s nothing wrong with that definition except for one small misleading phrase: “… allow humans to interact …”. What that should say is: “… allow a human to interact …”. Why? Because every interaction you’ve ever had with a conversational AI system has been one-on-one.

Sure, you and your kids can sit around the kitchen table blurting out song titles to Alexa (“Alexa, play the Beatles,” “No Alexa, play Travis Scott,” “No Alexa, play Olivia Rodrigo.” …). Alexa may even acknowledge each request, but she isn’t having a conversation with your family. She’s indiscriminately acknowledging and transacting on each request as if they’re coming in one by one, all from the same person.

And that’s where multiparty conversational AI comes into play.

What is Multiparty Conversational AI

With a few small tweaks, we can transform our previous definition of conversational AI to one that accurately defines multiparty conversational AI.

Multiparty conversational AI is the application of machine learning to develop language-based apps that allow AI agents to interact naturally with groups of humans using speech

While the definitions may appear similar, they are fundamentally different. One implies a human talking to a machine, while our new definition implies a machine being able to interact naturally with a group of humans using speech or language. This is the difference between one-on-one interactions versus an AI agent interacting in a multiparty environment.

Multiparty conversational AI isn’t necessarily new. Researchers have been exploring multiparty dialog and conversational AI for many decades. I personally contributed to early attempts at building multiparty conversational AI into video games with the Kinect camera nearly fifteen years ago.1 But sadly, no one has been able to solve all the technical challenges associated with building these types of systems, and there has been no commercial product of note.

What about the “Task-Oriented” part?

You may have wisely noted that I have not yet mentioned the words “task-oriented” from the title of this post. Conversational AI (sometimes also called dialog systems) can be divided into two categories: open-domain and task-oriented.

Open-domain systems can talk about any arbitrary topic. The goal is not necessarily to assist with any particular action, but rather to engage in arbitrary chitchat. Task-oriented systems are instead focused on solving “tasks”. Siri and Alexa are both task-oriented conversational AI systems.

In multiparty systems, tasks become far more complicated. Tasks are usually the output of a conversation in which a consensus is formed that necessitates action. Therefore, any task-oriented multiparty conversational AI system must be capable of participating in forming that consensus, or it will risk taking action before it is necessary to do so.

Multiparty Conversational AI, What is it Good For?

“Absolutely Everything!” Humans are inherently social creatures. We spend much of our time on this planet interacting with other humans. Some have even argued that humans are a eusocial species (like ants and bees) and that our social interactions are critical to our evolutionary success. Therefore, for any conversational AI system, to truly become an integral part of our lives, it must be capable of operating amongst groups of humans.

Nowhere is this more evident than in a corporate work environment. After all, we place employees on teams, they have group conversations on Slack/Teams and email, and we constantly gather small groups of people in scheduled or ad-hoc meetings. Any AI system claiming to improve productivity in a work environment will ultimately need to become a seamless part of these group interactions.

Building Task-Oriented Multiparty Conversational AI Systems

There is a litany of complex problems that need to be solved to reliably build a task-oriented multiparty conversational AI system that would be production-worthy. Below is a list of the most critical areas that need to be addressed.

  • Task detection and dialog segmentation
  • Who’s talking to whom
  • Semantic parsing (multi-turn intent and entity detection)
  • Conversation disentanglement
  • Social graphs and user/organization preferences
  • Executive function
  • Generative dialog

In the next sections, we’ll briefly dive deeper into each of these areas.

Task Detection and Dialog Segmentation

In a single-party system such as Alexa or Siri, task detection is quite simple. You address the system (“Hey Siri …”), and everything you utter is assumed to be a request to complete a task (or a follow-up on a secondary step needed to complete a task). But in multiparty conversations, detecting tasks2 is far more difficult. Let’s look at the two dialog segments below.

Two aspects of these conversations make accurately detecting tasks complex:

  • In the first dialog, our agent, Xena, is an active part of the conversation, and the agent is explicitly addressed. However, in the second conversation, our agent passively observed a task assigned to someone else and subsequently proactively offered assistance. That means we need to be able to detect task-oriented statements (often referred to as a type of dialog act) that might not be explicitly addressed to the agent.
  • The second issue is that the information necessary to complete either of these tasks is contained outside the bounds of the statement itself. That means we need to be able to segment the dialog (dialog segmentation) to capture all the utterances that pertain to the specific task.

Beyond the two challenges above there is also the issue of humans often making vague commitments or hedging on ownership. This presents additional challenges as any AI system must be able to parse whether a task request is definitive or not and be able to handle vague tasks or uncertain ownership.
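
To make the two steps concrete, here is a deliberately naive sketch. The cue phrases, the fixed-size context window, and the sample turns are invented stand-ins for trained dialog-act and segmentation models.

from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str

def is_task_act(turn: Turn) -> bool:
    """Placeholder dialog-act detector: flag commitments and requests."""
    cues = ("i'll ", "i will ", "can you ", "please ", "let's ")
    return any(cue in turn.text.lower() for cue in cues)

def segment_for(turns: list[Turn], idx: int, window: int = 2) -> list[Turn]:
    """Naive dialog segmentation: keep a window of surrounding turns as context."""
    lo, hi = max(0, idx - window), min(len(turns), idx + window + 1)
    return turns[lo:hi]

turns = [
    Turn("User 1", "We still owe finance the roadmap numbers."),
    Turn("User 2", "Right, the Q3 spreadsheet."),
    Turn("User 1", "I'll update that row by Friday."),
]
for i, turn in enumerate(turns):
    if is_task_act(turn):
        print([f"{t.speaker}: {t.text}" for t in segment_for(turns, i)])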

Who’s Talking to Whom

To successfully execute the task in a multiparty conversation we need to know who is making the request and to whom it is assigned. This raises another set of interesting challenges. The first issue is, how do we even know who is speaking in the first place?

In a simple text-based chat in Slack, it is easy to identify each speaker. The same is true of a fully remote Zoom meeting. But what happens when six people are all collocated in a conference room? To solve this problem, we need to introduce concepts like blind speaker segmentation and separation, and audio fingerprinting.

But even after we’ve solved the upfront problem of identifying who is in the room and speaking at any given time, there are additional problems associated with understanding the “whom”. It is common to refer to people with pronouns, and in a multiparty situation you can’t simply assume “you” is the other speaker. Let’s look at a slightly modified version of one of the conversations we presented earlier.

The simple assumption would be that the previous speaker (User 2) is the “whom” in this task statement. But after analyzing the conversation, it is clear that “you” refers to User 1. Identifying the owner, or “whom”, in this case requires techniques like coreference resolution (who does “you” refer to elsewhere in the conversation?) to identify the correct person.
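
Here is a deliberately simple sketch of that resolution step. The heuristic and the sample turns are invented; a production system would rely on a trained coreference model rather than rules like these.

def resolve_you(turns: list[tuple[str, str]], idx: int, participants: set[str]) -> str:
    """Toy heuristic: prefer the most recently named participant other than the
    current speaker, otherwise fall back to the previous speaker."""
    speaker = turns[idx][0]
    for _, text in reversed(turns[:idx]):
        for name in participants:
            if name != speaker and name.lower() in text.lower():
                return name  # someone other than the speaker was named
    for prior_speaker, _ in reversed(turns[:idx]):
        if prior_speaker != speaker:
            return prior_speaker  # fallback: whoever spoke last
    return "unknown"

turns = [
    ("User 1", "I can pull the roadmap numbers together."),
    ("User 2", "Great, and User 1, finance will need them by Friday."),
    ("User 3", "Can you send them straight to finance?"),
]
print(resolve_you(turns, idx=2, participants={"User 1", "User 2", "User 3"}))  # User 1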

Semantic Parsing

Semantic parsing, also sometimes referred to as intent and entity detection, is an integral part of all task-oriented dialog systems. However, the problem gets far more complex in multiparty conversations. Take the dialog in the previous section. A structured intent and entity JSON block might look something like this:

{
    "intent": "schedule_meeting",
    "entities": {
        "organizer": "User 1",
        "attendees": [
            "User 2",
            "User 3"
        ],
        "title": "next quarter roadmap",
        "time_range": "next week"
    }
}

Note that the details in this JSON block did not all originate from our task-based dialog act. Rather, the information was pulled from multiple utterances across multiple speakers. Successfully achieving this requires a system that is exceptionally good at coreference resolution and discourse parsing.
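
As an illustration of how those slots accumulate across turns, here is a toy multi-turn slot filler. The pattern matching stands in for a real semantic parser, and the utterances are invented to mirror the JSON above.

import json
import re

def extract_slots(speaker: str, text: str) -> dict:
    """Stand-in for a real semantic parser: pattern-match a few invented cues."""
    slots: dict = {}
    if "schedule" in text.lower() and "meeting" in text.lower():
        slots["intent"] = "schedule_meeting"
        slots["organizer"] = speaker
    if m := re.search(r"about (.+?)(?:\.|$)", text, re.IGNORECASE):
        slots["title"] = m.group(1)
    if "next week" in text.lower():
        slots["time_range"] = "next week"
    return slots

turns = [
    ("User 1", "Let's schedule a meeting about next quarter roadmap."),
    ("User 2", "Next week works for me."),
    ("User 3", "Same here."),
]
parsed = {"entities": {"attendees": []}}
for speaker, text in turns:
    slots = extract_slots(speaker, text)
    if "intent" in slots:
        parsed["intent"] = slots.pop("intent")
        parsed["entities"]["organizer"] = slots.pop("organizer")
    else:
        parsed["entities"]["attendees"].append(speaker)
    parsed["entities"].update(slots)
print(json.dumps(parsed, indent=4))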

Conversation Disentanglement

While some modern chat-based applications (e.g. Slack) have concepts of threading that can help isolate conversations, we can’t guarantee that any given dialog is single-threaded. Meetings are nonthreaded and chat conversations can contain multiple conversations that are interspersed with each other. That means any multiparty conversational AI system must be able to pull apart these different conversations to transact accurately. Let’s look at another adaptation of a previous conversation:

In this dialog, two of our users have started a separate conversation. This can lead to ambiguity in the last request to our agent. User 3 appears to be referring to the previous meeting we set up, but knowing this requires we separate (or disentangle) these two distinct conversations so we can successfully handle subsequent requests.
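
A crude way to picture disentanglement is to assign each new message to the thread it overlaps with most, as in the sketch below. The lexical-overlap score, the cutoff, and the messages are all invented; real systems use learned reply-to or thread classifiers.

def tokens(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def assign_threads(messages: list[str], cutoff: float = 0.1) -> list[int]:
    """Greedy threading: join the most similar existing thread, or start a new one."""
    threads: list[set[str]] = []
    assignment = []
    for msg in messages:
        toks = tokens(msg)
        scores = [len(toks & th) / max(1, len(toks | th)) for th in threads]
        if scores and max(scores) >= cutoff:
            best = scores.index(max(scores))
        else:
            best = len(threads)
            threads.append(set())
        threads[best] |= toks
        assignment.append(best)
    return assignment

messages = [
    "Can we schedule the roadmap meeting for next week?",
    "Did anyone see the outage alert this morning?",
    "Tuesday works for the roadmap meeting.",
    "Yes, the outage was a bad deploy, rolling back now.",
]
print(assign_threads(messages))  # [0, 1, 0, 1]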

Social / Knowledge Graph and User Preferences

While this might not be obvious, when you engage in any multiparty conversation you are relying on a database of information that helps inform how you engage with each participant. That means any successful multiparty conversational AI system needs to be equally aware of this information. At a bare minimum, we need to know how each participant relates to each other and their preferences associated with the supported tasks. For example, if the CEO of the company is part of the conversation you may want to defer to their preferences when executing any task.

Executive Function

Perhaps most importantly, any task-oriented multiparty conversational AI system must have executive function capabilities. According to the field of neuropsychology, executive function is the set of cognitive capabilities humans use to plan, monitor, and execute goals.

Executive function is critically important in a multiparty conversation because we need a plan for whether to take immediate action on any given request or seek consensus first. Without these capabilities, an AI system will just blindly execute tasks. As described earlier in this post, this is exactly how your Alexa behaves today. If you and your kids continuously scream out “play <song name x>”, it will just keep changing songs without any attempt to build consensus, and the interaction with the conversational AI system becomes dysfunctional. Let’s look at one more dialog interaction.

As you can see in the example above, our agent didn’t just automatically transact on the request to move the meeting to Wednesday. Instead, the agent used its executive function to do a few things:

  • Recognize that the second request did not come from the original requester
  • Preemptively gather information about whether the proposal was viable
  • Seek consensus with the group before executing

Achieving this capability requires gathering previously collected data, developing a plan, and then executing against that plan. So for a task-oriented multiparty conversational AI system to operate correctly within a group, it must have executive function capabilities.
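
Stripped to its skeleton, that decision logic might look something like the sketch below. The Request fields and the calendar check are placeholders for illustration, not a description of how our agent is actually built.

from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    originator: str   # who created the original meeting or task
    proposal: str     # e.g. "move the meeting to Wednesday"

def everyone_available(proposal: str) -> bool:
    """Stand-in for a real calendar availability check."""
    return True

def handle(request: Request) -> str:
    if request.requester == request.originator:
        return f"Executing: {request.proposal}"
    if not everyone_available(request.proposal):
        return f"'{request.proposal}' doesn't work for everyone; proposing alternatives."
    # The request came from someone other than the originator, so confirm first.
    return (f"{request.originator}, {request.requester} asked to "
            f"{request.proposal}. Are you OK with that?")

print(handle(Request(requester="User 2", originator="User 1",
                     proposal="move the meeting to Wednesday")))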

Generative Dialog Engine

Last but not least, any conversational AI system must be able to converse with your users. However, because the number of people in any given conversation and their identities are not predictable, and our executive function can produce a wide array of responses, no predefined or templated list will suffice for generating replies. A multiparty system needs to take all the information gathered above and generate responses on demand.

Wait, Don’t Large Language Models (LLMs) Solve This

With all the hype, you’d think LLMs could solve the problem of task-oriented multiparty conversational AI – and weave straw into gold. But it turns out that, at best, LLMs are just a piece in a much larger puzzle of AI technology needed to solve this problem.

There are basic problems like the fact that LLMs are purely focused on text and can’t handle some of the speaker identification problems discussed earlier. But even more importantly there is no evidence that LLMs have native abilities to understand the complexities of social interactions and plan their responses and actions based on that information.

It will require a different set of technologies, perhaps leveraging LLMs in some instances, to fully build a task-oriented multiparty conversational AI system.

So When Can I Invite an AI to Join the Party

While I can’t say anyone has solved all the challenges discussed in this post, I can say we are very close. My team at Xembly has developed what we believe is the first commercial product capable of participating in multiparty conversations as both a silent observer and an active participant. Our AI agent can join in-person meetings or converse with a group in Slack while also helping complete tasks that arise as a byproduct of these conversations.

We are only just beginning to tackle task-oriented multiparty conversational AI. So we may not be the life of the party, but go ahead and give Xembly and our Xena AI agent a try. The least you can do is send us an invite!

  1. With the Kinect Camera, we hoped to individually identify speakers in a room so each user could independently interact with the game. You can read more details about our work in this space here: 1, 2, 3, 4 ↩︎
  2. Are tasks in multiparty conversations just action items? Yes, since an action item is generally defined as a task that arises out of a group’s discussion. I’ll be writing a larger deep dive into action item detection in a future post. ↩︎

Generative AI – Prolific Copyright Infringer?

Poor man’s copyright: original music mailed to myself (Jason Flaks) via certified mail.

So, you might be wondering, “What makes this guy fit to pen an article about generative AI and copyright infringement?” I mean, I’m no copyright lawyer, nor do I moonlight as one on television. But I do bring a unique viewpoint to the table. After all, I’ve dabbled in the music industry and have been up to my elbows in machine-learning projects for a good chunk of my career. But perhaps my best qualification is my long-standing fascination with copyright law, which started when I was just a kid. That image up top isn’t some AI-generated piece from Midjourney; it’s my earnest attempt at copyrighting my original music over three decades ago using the poor man’s copyright approach.

So, What’s Copyright, and How Do I Get One?

Before we dive into the meaty debate of whether Generative AI infringes on anyone’s copyright, let’s clarify what copyright means. According to the US Copyright Office, copyright is a “type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression.” In simpler terms, copyright asserts your ownership of any intangible creations of human intellect (e.g., music, art, writing, etc).  The moment you fix your creation to a physical form (e.g., MP3 file, canvas, video recording, piece of paper, etc.), you have a copyright.

Amazingly, you don’t need to register for an official copyright to have one. So, why register a copyright? The Supreme Court has decided that to sue for copyright infringement, you must have registered your copyright with the US Copyright Office. However, they’ve also clarified that the registration requirement is separate from the date of creation. This means I could register a copyright for this blog post years from now and still sue for any prior infringement, as long as I can prove the date of creation.

How Can Generative AI Infringe on a Copyright?

There’s been a lot of talk about generative AI infringing on copyright protections, but many of these discussions oversimplify the issue. There are actually three different ways generative AI can infringe on your copyright, some of which favor artists and creators and some of which are more favorable to the generative AI companies.

  • Theft (copying) of copyrighted material
  • Distribution of copyrighted material
  • Use of copyrighted material in derivative or transformative works

On Theft (Copying) of Copyrighted Material

Let’s get real; while there are numerous court cases establishing the legitimate right to duplicate copyrighted material under the fair use doctrine, the default assumption is and should be that it is illegal to do so.  Therefore, if we can determine that generative AI companies are using copies of content they did not pay for or get permission to use and using that content in a way that falls outside fair use, then we can assume they are stealing copyrighted material.

There is little or no debate that generative AI companies are using copyrighted material. After all, OpenAI basically admits to lifting its training data from content “that is publicly available on the internet” on its website. And as I discussed earlier, just about anything newly written on the internet has an inherent copyright. But beyond the possible scraping of my blog posts, there is sufficient evidence that generative AI companies have ingested copyrighted books, images, and more.

And if you need any more proof, look at the image below, where I attempted to elicit lyrics from Bob Dylan’s Blowin’ in the Wind from ChatGPT. ChatGPT both recognized that the lyrics I provided were from the song and quoted a portion of the lyrics I did not provide. It can only do that because it has seen the lyrics before in its training data set.

ChatGPT prompted to generate lyrics from Bob Dylan’s “Blowin’ in the Wind”

If there is no question that copyrighted material was used in the training process, then we only need to assess whether the copying should be considered fair use. There are multiple justifications for fair use in copyright law.  Some are easy to interpret, and others more difficult.  Items like research or scholarly use are reasonably easy to assess, and I can find no fair argument that generative AI companies are using copyrighted material in either capacity.

So, that leaves the last question in fair use: does the copying materially impact the monetization of the content? And I think the answer here again is quite simple: YES! The easiest example I can give is the artwork I regularly use in my blog posts. I’ve traditionally paid for the art I use via services like Dreamstime. If Midjourney or Stable Diffusion trained on this type of art and I subsequently generate my blog post art via their services, I may never pay for art via Dreamstime or other similar services again. And in doing so, those artists have lost a way to monetize their art, and they are not equally compensated by the generative AI companies.

On Distribution of Copyrighted Material

If you’re old like me, you may remember those FBI copyright warnings that regularly made an appearance on DVDs and VHS tapes.

The unauthorized reproduction or distribution of this copyrighted work is illegal …

The issue of whether these systems distribute the content in its original form with little transformation is a big one. This distribution can occur in two ways: to end customers and to data annotators.

To end customers

Generative AI models are basically next-word (pixel, etc.) predictors. They aim to provide the most statistically likely next word based on a previous sequence of words. As a result, these models will, without any special adaptations, spit back exact copies of text, images, etc., especially in low-density areas.  As you can see from the image in the previous section, while OpenAI has been proactively trying to adapt the system not to distribute copyrighted material, I was still able to get it to do so with very little effort on my part.

So while these generative AI systems will continue to put mitigations in place to prevent the distribution of copyrighted content, there is little or no debate that they have been distributing it all along. And they are likely to continue doing so, as it is impossible to close every hole in the system.

To data annotators

OpenAI and others use reinforcement learning from human feedback (RLHF) to improve their models. RLHF requires that outputs from an original model are shown to human annotators to help build a reward model that leads to better outputs from the generative model. If these human annotators were shown copyrighted material, even in an effort to teach the model not to reproduce it in the future, OpenAI and other generative AI companies would clearly be distributing copyrighted content.

You might ask, “Shouldn’t copyright holders be happy that OpenAI is trying to train their models not to distribute copyrighted content?”  Well, maybe, but if I started traveling the country tomorrow, giving a for-profit seminar on how to detect illegal copies of the Super Bowl, and in these seminars, I played previous Super Bowl recordings to the attendees without the NFL’s permission … I think the NFL would have a problem with that.

On Use of Copyrighted Material in Derivative or Transformative Works

The question of whether the output generated by Generative AI models, when not a direct reproduction, counts as copyright infringement is a murky one. There are many examples where courts have determined that “style” is not copyrightable. There are further questions on whether any output created by generative AI based on copyrighted material is derivative or transformative.   Truth be told, it can likely be either, depending on how the model is prompted.  So it’s actually quite difficult to say for sure if the resulting output from generative AI models is fair use or copyright infringement.

We’re left then with questions about who is really violating copyright in any of these cases. Is it the model or the company that owns it? Or is it the user who prompted the model to generate the content? And does any of it really matter unless that generated content is published?

The Road Ahead

It seems to me the issue of generative AI and copyright has been complicated more than necessary. Generative AI companies must find a way to pay for the content they use to train their models. If they distribute the content, they may need to find a way to pay royalties.  Otherwise, these generative AI companies are profiting off the works of creators without properly compensating them.  And that just isn’t fair.

For artists, don’t let the thought of generative AI copying your style without compensation scare you. These models can’t generate new content and are limited to what they’ve seen in their training set. So, keep making new art, keep pushing boundaries, and if we solve the first problem of content theft and distribution, you’ll continue to be paid for the amazing work you create.

Your Large Language Model – it’s as Dumb as a Rock

© Jason Flaks – initially generated by DALL-E and edited by Jason Flaks

Unless you’ve been living under a rock lately you likely think we’re entering some sort of AI-pocalypse. The sky is falling and the bots have come calling. There are endless reports of ChatGPT acing college-level exams, becoming self-aware, and even trying to break up people’s marriages! The way  OpenAI and their ChatGPT product have been depicted, it’s a miracle we haven’t all unplugged our devices and shattered our screens. It seems like a sensible way to stop the AI overlords from taking control of our lives.

But never fear! I am here to tell you that large language models (LLMs) and their various compatriots are as dumb as the rocks we all might be tempted to smash them with. Well, ok, they are smart in some ways. But don’t fret—these models are not conscious, sentient, or intelligent at all. Here’s why.

Some Like it Bot: What’s an LLM?

Large Language Models (LLMs) actually do something quite simple. They take a given sequence of words and predict the next likely word to follow. Do that recursively, and add in a little extra noise each time you make a prediction to ensure your results are non-deterministic, and voila! You have yourself a “generative AI” product like ChatGPT.

But what if we take the description of LLMs above and restate it a little more succinctly:

LLMs estimate an unknown word based on extending a known sequence of words.

It may sound fancy—revolutionary, even—but the truth is it’s actually old school. Like, really, really old school—it’s almost the exact definition of extrapolation, a common mathematical technique that’s existed since the time of Archimedes! If you take a step back, large language models are nothing more than a fancy extrapolation algorithm. Last I checked, nobody thinks their standard polynomial extrapolation algorithm is conscious or intelligent. So why exactly do so many believe LLMs are?
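
To drive the comparison home, here are the two “extrapolators” side by side. The bigram table is obviously invented and bears no resemblance to a real LLM’s learned distribution, but structurally both loops do the same thing: extend a known sequence.

import numpy as np

# 1) Classic polynomial extrapolation: fit known points, predict beyond them.
x = np.arange(10)
y = np.sin(x / 3.0)
coeffs = np.polyfit(x, y, deg=4)
print(np.polyval(coeffs, 12.0))  # prediction outside the observed range

# 2) A "language model" as the same idea over tokens: repeatedly predict the
#    next word from the previous one, with a little sampling noise thrown in.
rng = np.random.default_rng(0)
bigrams = {
    "the": (["answer", "wind", "times"], [0.5, 0.3, 0.2]),
    "answer": (["is", "my"], [0.8, 0.2]),
    "is": (["blowin'", "the"], [0.6, 0.4]),
}
word, text = "the", ["the"]
for _ in range(5):
    options, probs = bigrams.get(word, (["the"], [1.0]))
    word = rng.choice(options, p=probs)
    text.append(word)
print(" ".join(text))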

Hear Ye, Hear Ye: What’s in an Audio Sample

Sometimes it’s easier to explain a complex topic by comparison. Let’s take a look at one of the most common human languages in existence—music.  Below are a few hundred samples from Bob Dylan’s “Like a Rolling Stone.” 


If I were to take those samples and feed them into an algorithm and then recursively extrapolate out for a few thousand samples, I’d have generated some additional audio content. But there is a lot more information encoded in that generated audio content than just the few thousand samples used to create it.

At the lowest level:

  • Pitch
  • Intensity
  • Timbre

At a higher level:

  • Melody
  • Harmony
  • Rhythm

And at an even higher level:

  • Genre
  • Tempo

So by simply extrapolating samples of audio, we generated all sorts of complex higher-level features of auditory or musical information. But pump the brakes! Did I just create AI Mozart? I don’t think so. It’s more like AI Muzak.
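
If you want to try this experiment yourself, a simple autoregressive fit is enough to “extrapolate” audio samples. The decaying sine below stands in for the actual recording, and the model order is an arbitrary choice.

import numpy as np

def ar_extrapolate(samples: np.ndarray, order: int = 64, n_future: int = 2000) -> np.ndarray:
    """Fit a simple autoregressive model by least squares, then recursively
    extrapolate future samples from the model's own predictions."""
    X = np.array([samples[i:i + order] for i in range(len(samples) - order)])
    y = samples[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    history = list(samples[-order:])
    generated = []
    for _ in range(n_future):
        nxt = float(np.dot(coeffs, history[-order:]))
        generated.append(nxt)
        history.append(nxt)
    return np.array(generated)

# A decaying 440 Hz tone sampled at 44.1 kHz stands in for the real excerpt.
t = np.arange(0, 0.05, 1 / 44100)
samples = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
future = ar_extrapolate(samples)
print(len(future), future[:5])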

An AI of Many Words: What’s Next? 

It turns out that predicting the next word in a sequence of words will also generate more than just a few lines of text. There’s a lot of information encoded in those lines,  including the structure of how humans speak and write, as well as general information and knowledge we’ve previously logged. Here’s just a small sample of things encoded in a sequence of words:

  • Vocabulary
  • Grammar/Part of Speech (PoS) tagging
  • Coreference resolution (pronoun dereferencing)
  • Named entity detection
  • Text categorization
  • Question and answering
  • Abstract summarization
  • Knowledge base

All of the information above can, in theory, be extracted by simply predicting the next word, much in the same way predicting the next musical sample gives us melody, harmony, rhythm, and more.   And just like our music extrapolation algorithm didn’t produce the next Mozart, ChatGPT isn’t going to create the next Shakespeare (or the next horror movie villain, for that matter).

LLMs: Lacking Little Minds? 

Large Language Models aren’t the harbinger of digital doom, but that doesn’t mean they don’t have inherent value. As an early adopter of this technology, I know it has its place. It’s integral to the work we do at Xembly, where I’m the co-founder and CTO. However, once you understand that LLMs are just glorified extrapolation algorithms, you gain a better understanding of the limitations of the technology and how best to use it.

Five Alive: How to Use LLMs So They Don’t Take Over the World


LLMs have huge potential. Just like any other tool, though, in order to extrapolate the most value, you have to use them properly. Here are five areas to consider as you incorporate LLMs into your life and work. 

  • Information must be encoded in text
  • Extrapolation error with distance
  • Must be prompted
  • Limited short-term memory
  • Fixed in time with no long-term memory

Let’s dig a little deeper.

Information Must Be Encoded in Text

Yann LeCun probably said it best:

Humans are multi-modal input devices and many of the things we observe and are aware of that drive our behavior aren’t verbal  (and hence not encoded in text). An example we contend with at Xembly is the prediction of action items from a meeting. It turns out that the statement “I’ll update the row in the spreadsheet” may or may not be a future commitment to do work.  Language is nuanced, influenced by other real-time inputs like body language and hundreds of other human expressions. It’s entirely possible in this example that the task was completed in real-time during the meeting, and the spoken words weren’t an indication of future work at all.

Extrapolation Error with Distance

Like all extrapolation algorithms, the further you get from your source signal (or prompt, in the case of LLMs), the more likely you are to experience errors. Sometimes a single bad prediction, one that negates an otherwise affirmative statement or assigns the wrong gendered pronoun, can cause downstream errors in future predictions. These tiny errors can often lead to convincingly good responses that are factually inaccurate. In some cases, you may find LLMs return highly confident answers that are completely incorrect. These types of errors are referred to as hallucinations.

But both of these examples are really just forms of extrapolation error. The errors will be more pronounced when you make long predictions. This is especially true for content largely unseen by the underlying language model (for example, when trying to do long-form summarization of novel content).

Must Be Prompted

Simply put, if you don’t provide input text, an LLM will do nothing. So if you are expecting ChatGPT to act as a sage and give you unsolicited advice, you’ll be waiting a long time. Many of the features Xembly customers rave about are based on our product providing unsolicited guidance. Large language models are no help to us here.

Limited Short-Term Memory

LLMs generally only operate on a limited window of text. In the case of ChatGPT, that window is roughly 3000 words. What that means is that new information not already incorporated in the initial LLM training data can very quickly fall out of memory. This is especially problematic for long conversations where new corporate lingo may be introduced at the start of a conversation and never mentioned again. Once whatever buzzword is used falls out of the context window it will no longer contribute to any future prediction, which can be problematic when trying to summarize a conversation.
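
A toy way to visualize the problem is to treat the context as a fixed-length window and watch early words scroll out. The six-word window and the made-up buzzword below are absurdly small stand-ins for the real thing, which is measured in thousands of tokens.

from collections import deque

WINDOW_WORDS = 6  # absurdly small on purpose; real windows hold thousands of tokens
context = deque(maxlen=WINDOW_WORDS)

# "XembOps" is an invented buzzword introduced early in the conversation.
transcript = "XembOps means our new release process ... lots of chatter ... ship it".split()
for word in transcript:
    context.append(word)

print("XembOps" in context)  # False: the term has scrolled out of the window
print(list(context))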

Fixed in Time with no Long-term Memory

Every conversation you have with ChatGPT only exists for that session. Once you close that browser or exit your current conversation, there is no memory of what was said. That means you cannot depend on new words being understood in future conversations unless you reintroduce them within a new context window. So, if you introduce an LLM to anything it hasn’t heard before in a given session, you may find it uses that word correctly in subsequent responses. But if you enter a new session and have any hopes that the word will be used without introducing it in a new prompt, brace yourself—you will be disappointed.

To Use an LLM or Not to Use an LLM

It’s a big question. LLMs are exceedingly powerful, and you should strongly consider using them as part of your NLP stack. I’ve found the greatest value of many of these LLMs is that they can potentially replace all the bespoke language models folks have been building for some time. You may not need those custom entity models, intent models, abstract summarization models, etc. It’s quite possible that LLMs can accomplish all of these things with similar or better accuracy, while greatly reducing time to market for products that rely on this type of technology.

There are many items in the LLM plus column, but if you are hoping to have a thought-provoking, intelligent conversation with ChatGPT, I suggest you walk outside and consult your nearest rock. You just might have a more engaging conversation!

The Annotator’s Dilemma: When Humans Teach Machines to Fail

What does a machine learning model trained via supervised learning and a lion raised in captivity have in common? … They’re both likely to die in the wild!

Now that might sound like a joke aimed at getting PETA to boycott my blog, but this is no laughing matter. Captive bred lions are more likely to die in the wild and so are machine learning models trained with human annotated data.

According to a 2008 National Geographic article, captive-bred predators often die in the wild because they never learn the natural behaviors necessary for success. This is because their human captors never teach the animals the necessary survival skills (e.g., hunting) or inadvertently teach behaviors that are detrimental to their survival (e.g., no fear of humans). It’s not for lack of trying, but for a variety of reasons it is impractical or impossible to expose a captive predator to an environment that completely mirrors its ultimate home.

Why Humans Teach Machines to Fail

Not surprisingly, when humans teach machines, they fail for many of the same reasons. In both supervised and semi-supervised machine learning, we train a model using human-annotated data. Unfortunately, human cognitive and sensory capabilities can introduce a variety of consequences that often lead us to teach machines the wrong thing or fail to fully expose them to the environment they will find in the wild. While there are likely many areas that impact the quality of human annotations, I’d like to cover five that I believe have the greatest impact on success.

Missing Fundamental and the Transposed Letter Effect (Priming)

Have you ever had someone explain an interesting factoid that sticks with you for life? One such example in my life is the “missing fundamental” effect, first introduced to me by my freshman-year Music Theory professor. The question he posed was, “How are you able to hear the low A on a piano when the vast majority of audio equipment cannot reproduce the corresponding fundamental frequency?” It turns out the low A on a piano has a fundamental frequency of 27.5 Hz, and most run-of-the-mill consumer audio equipment is incapable of producing a frequency that low with any measurable gain. Yet we can hear the low A on a piano recording even with those crappy speakers. The reason is the “missing fundamental” effect: in essence, the human brain can infer the missing fundamental frequency from the upper harmonics.

A similar concept is the “transposed letter” effect. I’m sure you’ve seen the meme. You know, those images with scrambled letters that tell you you’re a genius if you can read them. Your ability to read those sentences is due to the transposed letter effect and is related to priming. Basically, even if the letters in the words printed on the page are jumbled, reading them can still activate the same regions of the brain as the original words.

You might be asking what any of this has to do with annotating data and teaching machines. The problem arises when you realize we humans can correctly identify something even when all the data needed to do so is not actually present. If we annotate data this way, we are assuming the machine has the same capabilities, and that might not be so. Teaching a machine that “can you raed this” and “can you read this” are the same thing may have unintended consequences.

Selection Bias

If you gave me millions of images and asked me to find and label all the images with tomatoes, I am probably going to quickly scan for anything red and circular. Why? Because that’s my initial vision of a tomato and scanning the images that way would likely speed up the process of going through them all. But that is me imparting my bias of what a tomato looks like into the annotated data set. And it’s exactly how a machine never learns that the Green Zebra and Cherokee Purple varieties are indeed tomatoes!

Multisensory Integration

Humans often make use of multiple senses simultaneously to improve our perception of the world. For example, it has been well documented that speech perception, especially in noisy environments, is greatly enhanced when the listener can leverage visual and auditory cues. However, the vast majority of commercial machine learning models are single modality (reading text, scanning images, scanning audio). So, if my ability to understand a noisy speech signal is only possible due to a corresponding video it may be dangerous to try and teach a machine what was said since the machine likely does not have access to the same raw data.

Response Bias

I am ashamed to admit this but every time I get an election ballot, I feel an almost compulsive need to select a candidate for every position. Even when I have little or no knowledge of the office the candidates are running for, the background of the competing candidates, and their policy positions. Usually, I arbitrarily select a candidate based on their party affiliation or what college they went to, which is probably only slightly better than selecting the first name on the ballot. My need to select a candidate even though I have no basis for doing so is likely a form of response bias. The problem with response bias is it generally leads to inaccurate data. If your annotators suffer from response bias, you are likely teaching the machine with inaccurate data.

Zoning Out

Have you ever driven somewhere only to arrive at your destination with no recollection of how you got there? If so, like me, you have experienced zoning out. With repetitive tasks, we tend to start with an implicit speed-versus-accuracy tradeoff, but over time, as the task gets boring, we start to zone out or get distracted while maintaining the same speed, which ultimately leads to errors. Annotating data is a highly repetitive task and therefore has a high probability of generating these types of errors. And when we use error-ridden annotated data to teach our machines, we likely teach them the wrong thing.

How to be a Better Teacher

While the problems above might seem daunting there are things we can do to help minimize the effects of human behavior on our ability to accurately teach machines.

Provide a Common Context

The missing fundamental and multisensory integration problems are both issues with context. In each of these cases either historical or current context allows us humans to discern something another species (a.k.a. the machine) may not be able to comprehend. The solution to this problem is to make sure humans teach the machine with a shared context. The easiest way to fix this problem is to limit the annotator to the same modality the machine will operate with. If the task at hand is to teach a machine to recognize speech from audio, then don’t provide the annotator access to any associated video content. If the task is to identify sarcasm in written text don’t provide the annotator with audio recordings of the text being spoken. This will ensure the annotator teaches the machine with mutually accessible data.

Beyond tooling you can also train your annotators to try and interpret data from multiple perspectives to ensure their previous experiences don’t cause brain activations that the machine might not benefit from. For example, it is very easy to read text in your head with your own internal inflections that might change the meaning. After all the slightest change in inflection can turn a benign comment into a sarcastic insult. However, if you train annotators to step back and try to read the text with multiple inflections you might avoid this problem.

Introduce Randomness

While it might be tempting to let users search around for items they think will help teach a machine, doing so can increase the likelihood of selection bias. There may be good reasons to allow searching to speed up data collection for certain classes, but it is also important to make sure a sizeable portion of your data is randomly selected. Set up different jobs for your annotators and ensure some proportion of your labeling effort comes from randomly selected examples.
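
One simple way to enforce that mix is to build every annotation batch from two pools, as in the sketch below. The corpus, the search hits, and the 50/50 split are placeholders you would tune for your own project.

import random

def build_batch(corpus: list[str], search_hits: list[str],
                batch_size: int = 100, random_fraction: float = 0.5) -> list[str]:
    """Mix targeted search results with uniformly random samples to limit selection bias."""
    n_random = int(batch_size * random_fraction)
    n_search = batch_size - n_random
    batch = random.sample(search_hits, min(n_search, len(search_hits)))
    batch += random.sample(corpus, min(n_random, len(corpus)))
    random.shuffle(batch)  # so annotators can't tell which items came from which pool
    return batch

corpus = [f"image_{i}.jpg" for i in range(10_000)]
search_hits = [f"red_round_{i}.jpg" for i in range(300)]
print(len(build_batch(corpus, search_hits)))  # 100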

Reduce Cognitive Load

While we may not be able to prevent boredom and zoning out, we can reduce complexity in our labeling tools. By reducing the cognitive load we are more likely to minimize mistakes when people get distracted. Some ways to reduce cognitive load include limiting labeling tasks to single step processes (i.e., only label one thing at a time) and providing clear and concise instructions that remove ambiguity.

Be Unsure

Last but not least, allow people to be unsure. If you force people to put things in one of N buckets they will. By giving people the option of being “unsure” you minimize how often you’ll get inaccurate data due to people’s compulsion to provide an answer even if no correct answer is obvious.

Final Thoughts

No teacher wants to see their students fail. So it’s important to remember, whether training a lion or a machine learning model, that different species likely learn in different ways. If we cater our teaching to our students, we just might find our machine learning models fat and happy long after we’ve sent them off into the wild.

*I’d like to thank Shane Walker and Phil Lipari, who inspired this post and have helped me successfully teach many machine learning models to survive in the wild.

Teachers Keep on Teaching – ‘til I Reach my Highest Ground

Forgive me, Stevie Wonder, for slightly reordering your lyrics, but I think you’d agree that it’s hard to reach your highest ground without the help of teachers. Being a teacher is often a thankless job. Nobody gets rich or famous for being a teacher, yet the contribution teachers make to society is invaluable. So, in honor of Teacher Appreciation Week, I’d like to take the opportunity to thank some of the teachers who helped me reach my highest ground.

The Teachers Who Made Me What I Am Today

Erik Lawrence

http://www.eriklawrencemusic.com/

My saxophone teacher from the age of 9 until I was 18, Erik is largely responsible for my love of music. Seeing that my career has largely centered around music and technology, Erik can take much of the credit for planting the seed and nurturing the music branch of that tree. When I began to take an interest in the piano and composing, Erik was quick to urge my parents to get me piano lessons. And when my parents wanted to buy me a new saxophone as a graduation present, it was Erik who took me to the best New York area music stores to find the perfect sax. Beyond music, Erik was an extremely positive role model throughout my formative years. He taught me to treat others with respect, be accountable for my mistakes, and value my time and the time of others. For all of the above and much more, I am grateful for the impact Erik had on my life.

David Snider

https://www.facebook.com/davidsnidermusic
https://www.facebook.com/Snider-Sound-Studio-1450758071810268

By the age of 13, I had taught myself to play piano, I was beginning to compose my own music, and I was slowly collecting a variety of audio electronics. I owned a Yamaha SY55 synthesizer with an onboard sequencer, a Tascam four-track recorder, and boxes of random audio cables. Recognizing my newfound passions, Erik Lawrence convinced my parents to get me piano lessons and introduced me to David Snider. If Erik was my music guru then David was my technology guru. Upon realizing my knack for creating music and tinkering with anything music electronics-related, David managed to convince my parents to buy me my first computer (an Apple Mac) and my first audio software package (Mark of the Unicorn’s MOTU – Performer). He took my meandering teenage hobbies and turned them into a focused passion that would ultimately drive a large part of my career. David brought much more than technology to my life. He taught me how to play jazz piano, something I still do to this day. In fact, 30+ years later I can still play the song Misty exactly the way he taught it to me. While David may not remember this, he called me in the early days of my freshman year of music school to wish me luck and give me some advice on avoiding some of the pitfalls of a musician’s life. It seemed inconsequential at the time, but the fact that he cared enough to do that is remarkable.

Eileen M Curley

In the first quarter of my freshman year of high school, I had a failing grade in math. This was not acceptable in the Flaks household, so my mother reached out to the teacher to see if she had any advice on how I could improve my grade. Ms. Curley selflessly gave her own free time to provide me with extra help. It quickly became apparent that my problem with math was unrelated to my aptitude and purely a function of not paying attention and not doing the work. In no time at all, I went from an F to an A. That year I scored a 99 on the New York State standardized math exam (the Regents), losing only 1 point for carelessly failing to carry a negative sign down to my final answer on one question. When Ms. Curley received the results from that exam, she took time out of her day to directly call my house and excitedly tell my mother how well I did. My time with Ms. Curley was a turning point in my life. Little did she know that I would ultimately go on to be a math major in college, leading to a career in math and computer science.

William (Bill) Garbinsky (a.k.a. Mr. G)

William Garbinsky was a musician first and a high school music teacher second. He loved music and he gave innumerable hours during and after school to help students like me become better musicians. He taught the concert band, wind ensemble, marching band, and jazz band, and I was a member of all of them. He gave band nerds like me a place to call home and surrounded us with a like-minded peer group that made us all feel like we were part of something bigger. Mr. G even took time out of his day to teach AP Music History and Music Theory classes to the small cohort of students who were interested. Thanks to those Advanced Placement college credits, I had some free time on my schedule when I entered college, which I promptly filled with math classes. Sadly, Mr. G passed away some years ago, but I hope he knows what a difference he made in my life and the lives of countless others.

James (Jim) McElwaine

https://www.linkedin.com/in/jimmcelwaine/
https://www.purchase.edu/live/profiles/253-james-mcelwaine
https://qcpages.qc.cuny.edu/music/faculty/james-mcelwaine

I was lucky enough to be accepted into the Conservatory of Music at Purchase College. I was even more fortunate to study under James McElwaine. Professor McElwaine was a physics student before going full bore into music. So, when he stumbled upon a kid in his music program who was taking calculus classes as electives, he embraced it and pushed me to pursue it further. Beyond encouraging me to explore the math program, Jim recognized my passion for everything audio electronics related, and he opened every door he could, including getting me jobs running live sound for campus events and running the conservatory’s recording studios, and he even got me my first real paid gig as a recording engineer. Professor McElwaine’s willingness to embrace and encourage my odd trajectory through music school played a huge role in my ability to progress into a master’s program that ultimately allowed me to go from using pro-audio equipment to building it.

Martin (Marty) Lewinter

https://www.linkedin.com/in/marty-lewinter-phd-mfa-a87b1b127/
https://www.amazon.com/Marty-Lewinter/e/B015MHNC8C
https://www.purchase.edu/live/profiles/638-martin-lewinter

If my music professor was a physicist, then surely I needed a math professor who was also a musician. Lucky for me, the head of the math program, Martin Lewinter, also happened to be a seasoned musician. Professor Lewinter taught that very first calculus class I took as an elective. After witnessing my interest in math, Marty encouraged me to take on a second degree. Before long I was pursuing two simultaneous bachelor’s degrees with a focus in music composition and math/computer science. Professor Lewinter gave hours of his time towards helping me as the math curriculum progressively got harder, and he continued to push me to excel in both the math and the music program. When I started applying to graduate schools with a heavier engineering focus, I picked up some textbooks to independently review. After struggling over some of the math equations, I asked Professor Lewinter for some help. I still remember our conversation, where I showed him an equation in a book and he had to explain to me that engineers use j for imaginary numbers, not i, so as not to be confused with the variable for current. It was a simple thing that just might have prevented my first year in graduate school from turning into a complete disaster!

Ken Pohlmann

https://www.linkedin.com/in/ken-pohlmann-246ab92/
https://www.soundandvision.com/writer/80474
https://www.amazon.com/Ken-C-Pohlmann/e/B001IQUQC2/ref=dp_byline_cont_pop_ebooks_1

In my junior and senior years of college, I started to dive deeper into the underlying math behind the audio tools I was using. I happened to be reading a book called Principles of Digital Audio and found a note about the author, who was a professor of “music engineering” at the University of Miami. Music engineering sounded like an awfully good way to combine four grueling years of math and music education, so I sent Professor Pohlmann an email asking if he’d consider accepting a student without an undergraduate electrical engineering degree. Ken was kind enough to respond, and he recommended I take an extra year to get some basic engineering credits and he pointed me towards some textbooks that might give me an early head start. Well, I did buy the books, but I otherwise ignored him and applied to the program anyway. I still remember being overjoyed at receiving an acceptance letter where Ken told me that he thought my math background would carry me through the curriculum. With Professor Pohlmann and the University of Miami Music Engineering program, I stumbled into a small world of like-minded folks who had a passion for math and music. Professor Pohlmann took a hodgepodge of academic pursuits I haphazardly pieced together and combined them into one coherent subject that would ultimately lead to my final career as an engineer, manager, and executive on countless audio projects.

Will Pirkle

https://www.willpirkle.com/
https://www.linkedin.com/in/will-pirkle-1b87926/
https://people.miami.edu/profile/wpirkle@miami.edu

How many teachers have fed you information that you can directly correlate to your current and future earnings? Not many, but that is exactly what Will Pirkle did for me and many others. Professor Pirkle was able to perfectly blend theory and practice and teach me how to effectively turn everything I had learned into real software that did amazing things with an audio signal. Will took all the ethereal subject matter I had learned over the years and made it into something I could feel and touch. It’s that skill set, along with my own willingness to pester anybody for something I want, that led me to my first full-time job with a music software company called Opcode (ironically, a competitor of MOTU), bringing me full circle back to some of my earlier education. Will’s teaching has stood the test of time and I still find a use for some of what he taught. And whenever anybody asks for advice about the audio/music engineering space I regurgitate much of the knowledge Professor Pirkle imparted to me. Without a doubt, I can say that my employability and financial wellbeing are directly tied to everything I learned from Professor Pirkle.

To All the Teachers

While the eight teachers above had the most profound effect on my life there are many other teachers who contributed to my success and I’d like to offer my thanks to all of them. And to all the teachers out there who feel unappreciated, please remember that somewhere out there, in that sea of children, is a kid who just needs a little extra push to find out who they are and be the best version of themselves. Keep fighting for those kids because I am living proof of the impact you can have.

One Final Note of Gratitude

Since Mother’s Day is fast approaching, I’d be remiss if I didn’t thank the greatest teacher of them all, my mother, Susan Flaks. My mother was there for every step of the journey described in this post. Whether that was teaching me my first notes on the piano, driving me to private music lessons, paying for that first computer, pushing me to get extra help when I needed it, paying for college, or just supporting me through my entire education, she was the root of all my academic and professional success. My mother was more than just an amazing parent. She was also a teacher for more decades than she would care for me to publicly comment on, and I know she had a positive influence on numerous students who, like me, went on to be happy, healthy, and well-rounded adults who have made a positive contribution to their communities and the world.

Voting is Just a Precision and Recall Optimization Problem

It’s hard to avoid the constant bickering about the results of our last election. Should mail-in voting be legal? Do we need stricter voter identification laws? Was there fraud in the last election? Did it impact the results? These are just a fraction of the questions circulating around elections and voter integrity these days. Sadly, these questions appear to be highly politicized and it’s unclear if anybody is really interested in asking what an optimal election system looks like.

In a truly fair and accurate representative democracy, a vote not counted is just as costly as one inaccurately counted. More precisely, a single mother with no childcare who doesn’t vote because of 4-hour lines is just as damaging to the system as a vote for a Republican candidate that is intentionally or accidentally recorded for the opposing Democratic candidate.

Therefore, we can conclude an optimal election system really involves optimizing on both axes. How do we make sure everyone who wants to vote gets to vote? And how do we ensure every vote is counted accurately? When viewed this way one can’t help but see the parallels to optimizing a machine learning classifier for precision (when we count votes for a given candidate how often did we get it right) and recall (of all possible votes for that candidate how many did we find).

Back the Truck Up! What is Precision and Recall Anyway

Precision and Recall are two metrics often used to measure the accuracy of a classifier. You might ask “why not just measure accuracy?” and that would be a valid question. Accuracy, defined as everything we classified correctly divided by everything we evaluated, suffers from what is commonly known as the imbalanced class problem.

Suppose we have a classifier (a.k.a. laws and regulations) that can take a known set of voters who intend to vote “democrat” and “not democrat” (actual / input) and then outputs the recorded vote (predicted / output).

Let’s assume we evaluate 100 intended voters/votes, 97 of whom intend not to vote for the Democratic candidate, and let’s build the dumbest classifier ever known. We are just going to count every vote as “not Democrat,” regardless of whether the ballot was marked for the Democratic candidate or not.

N (number of votes) = 100                          Output (Predicted) Value
                                             Democrat          Not a Democrat
Input (Actual)    Democrat                   TP = 0            FN = 3             TOTAL DEMOCRATS = 3
Value             Not a Democrat             FP = 0            TN = 97            TOTAL NOT DEMOCRATS = 97
                                             POSITIVES = 0     NEGATIVES = 100

To make our calculations a little easier, those numbers have been dropped into the table above, which compares inputs to outputs and is also known as a confusion matrix. To simplify some of our future calculations we can further define some of its cells:

  • True Positives (TP): Correctly captured an intended vote for the Democrats as a vote for the Democrats (0)
  • True Negatives (TN): Correctly captured a vote NOT intended for the Democrats as a vote not for the Democrats (97)
  • False Positives (FP): Incorrectly captured a vote NOT intended for the Democrats as a vote for the Democrats (0)
  • False Negatives (FN): Incorrectly captured an intended vote for the Democrats as a vote not for the Democrats (3)

Now we can slightly relabel our accuracy equation and calculate our accuracy with our naïve classifier and the associated values from the table above.
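
Using the cell labels above (this is just the standard accuracy formula with our table’s values plugged in):

Accuracy = (TP + TN) / (TP + TN + FP + FN) = (0 + 97) / (0 + 97 + 0 + 3) = 0.97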

97% Accuracy! We just created the world’s stupidest classifier and achieved 97% accuracy! And therein lies the rub. The second I expose this classifier to the real world with a more balanced set of inputs across classes we will quickly see our accuracy plummet. Hence, we need a better set of metrics. Ladies and gentlemen, I am delighted to introduce …

  • Precision: Of the votes recorded (predicted) for the Democrats, how many were correct

  • Recall: Of all possible votes for the Democrats, how many did we find (both are worked out below)
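
Plugging our table’s values into the standard formulas makes the problem obvious (note that with zero predicted positives, precision is strictly undefined and is conventionally treated as 0):

Precision = TP / (TP + FP) = 0 / (0 + 0) → undefined (treated as 0)
Recall = TP / (TP + FN) = 0 / (0 + 3) = 0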

What becomes blatantly clear from evaluating these two metrics is that our classifier, which appeared to have great accuracy, is terrible. None of the intended votes for the democrats were correctly captured and of all possible intended votes for the democrats, we found none of them. It’s worth noting that the example I’ve presented here is for a binary classifier (democrat, not democrat) but these metrics can easily be adapted to multi-class systems that more accurately reflect our actual candidate choices in the United States.

There’s No Such Thing as 100% Precision and Recall

Gödel’s incompleteness theorems, which loosely state that any consistent formal system expressive enough to describe arithmetic contains truths it cannot prove, likely apply to machine learning and artificial intelligence systems. In other words, since machine learning algorithms are built around our known formal mathematical systems, there will be some truths they can never describe. A consequence of that belief, and something I preach to everyone I work with, is that there is really no such thing as 100% precision and recall, no matter how great your model is or what your test metrics tell you. There will always be edge cases.

So if 100% precision and recall is all but impossible, what do we do? When developing products around machine learning classifiers, we often ask ourselves what is most important to the customer: precision, recall, or both. For example, if I create a facial recognition system that notifies the police if you are a wanted criminal, we probably want to err on the side of precision, because arresting innocent individuals would be intolerable. But in other cases, like flagging inappropriate images on a social network for human review, we might want to err on the side of recall, so we capture most if not all images and allow humans to further refine the set.

It turns out that precision and recall can very often be traded off against each other. Most classifiers emit a confidence score of sorts (often the output of a softmax layer), and by simply varying the threshold on that output we can trade precision for recall and vice versa. Another way to think about this: if I require my classifier to be very confident in its output before I accept the result, I tip the results in favor of precision. If I loosen my confidence threshold, I tip them back in favor of recall.
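
To make the trade-off tangible, here is a small illustrative sketch in plain NumPy, using made-up confidence scores and labels rather than anything from a real system: sweeping the threshold from high to low shifts the balance from precision toward recall.

import numpy as np

# Hypothetical classifier confidence scores and true labels (1 = vote for the Democrat, 0 = not)
scores = np.array([0.95, 0.90, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05])
labels = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])

for threshold in (0.9, 0.5, 0.1):
    predicted = scores >= threshold
    tp = np.sum(predicted & (labels == 1))
    fp = np.sum(predicted & (labels == 0))
    fn = np.sum(~predicted & (labels == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")

With these toy numbers, a strict threshold of 0.9 yields perfect precision but misses half the true votes, while a loose threshold of 0.1 finds every true vote at the cost of many false positives.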

And how might this apply in voting? Well, if I structure my laws and regulations such that every voter must vote in person with 6 forms of ID and the vote is tallied in front of the voter by a 10-person bipartisan evaluation team who must all agree … we will likely have very high precision. After all, we’ve greatly increased the confidence in the vote outcome. But at what expense? We will also likely slow down the voting process and create massive lines which will significantly increase the number of people who might have intended to vote but don’t actually do so, hence decreasing recall.

Remind me again what the hell this has to do with Voting

The conservative-leaning Heritage Foundation makes the following statement on their website:

“It is incumbent upon state governments to safeguard the electoral process and ensure that every voter’s right to cast a ballot is protected.”

I strongly subscribe to that statement and I believe it is critical to the success of any representative democracy. But ensuring that every voter’s right to cast a ballot is protected requires not only that we accurately record the captured votes, but also that every voter who intends to vote is unhindered in doing so.

Maybe we need to move entirely to in-person voting while simultaneously allocating sufficient funds for more polling stations, government-mandated paid time off, and government-provided childcare. Or maybe we need all mail-in ballots but some new process or technology to ensure the accuracy of the votes. Ultimately, I don’t pretend to know the right answer, or whether we even have a problem to begin with. What I do know is that if we wish to improve our election systems we must first start with data on where we stand today and then tweak our laws and regulations to simultaneously optimize for precision and recall.

So, the next time a politician proposes changes to our election system, ask … no, demand that they provide data on the current system and how their proposed changes will impact precision and recall. Because only when we optimize for both of these metrics can we stop worrying about making America great again and start working on making America even greater!

“If you start me up I’ll never stop …” Until We Successfully Exit

“Hey, our fledgling startup is on a path to being the next *INSERT BIG TECH COMPANY NAME HERE* and we think you’re a great fit for our CTO role”. Find me a technical leader who hasn’t been enticed by those words and you’ll have found a liar. So, what happens when one succumbs to the temptation and joins an early-stage startup? Well, if you have been wondering where I’ve been for the past couple of years, I was fighting the good fight at a small, early-stage NLP/machine learning based risk intelligence startup. And while I’m not retired or sailing around the world in my new 500-foot yacht, we were able to successfully exit the company with a net positive outcome for all involved. My hope with this post is that I can share some of my acquired wisdom, and perhaps steer the next willing victim down a similar path of success.


If I could sum up my key learnings in a few bullet points, it would boil down to this:

  • If you don’t believe … don’t join
  • Be prepared to contribute in any way possible
  • Find the product and focus on building it
  • Pick the race you have enough fuel for and win it


What I’d like to do in the rest of this post is break down each one of these items a little further.

If you don’t believe … don’t join


Maybe this goes without saying, but if you don’t believe in the vision, the people, and the product you shouldn’t join the startup approaching you. The CTO title is alluring, and it is easy to fool yourself into taking a job for the wrong reasons. But the startup experience is an emotional slog of ups and downs and it will be nearly impossible to weather the ride if you don’t wake up every day with an unyielding conviction for what you’re doing. As I’ll explain later in this post, you don’t need to believe you’re working for the next Facebook, but you do need to believe you are building a compelling product that has real value for you, your coworkers, your investors, and your customers.

Be prepared to contribute in any way possible


For the first few months on the job I used to go into our tiny office and empty all the trash bins because, if I didn’t, that small office with 5 engineers started to smell! It didn’t take long for someone to call out that I was appropriately titled, CTO (a.k.a. Chief Trash Officer). You might be asking why anybody would take a CTO job to wind up being the corporate custodian, but that is what was needed on some days.


While I have steadfastly maintained my technical chops throughout my career, I hadn’t really written a lick of production code for nearly two decades prior to this job. But with limited resources, it became clear I also needed to contribute to the code base and so I dusted off those deeply buried skills and contributed where I could. When you join a startup with that CTO title, it is easy to convince yourself that you’ll build a huge team, be swimming in resources, and have an opportunity to direct the band versus playing in it. But you’ll quickly find that in the early stages of a startup, the success of the company will depend on your willingness to drop your ego and contribute wherever you can.

Find the product and focus on building it


Great salespeople can sell you the Brooklyn Bridge. And if you’re just lucky enough, you might have a George C. Parker in your ranks. But the problem with great salespeople is they will do almost anything to close the sale and that comes with a real risk that they’ll sell custom work. If that happens over an extended period of time, you will be unable to focus on the core product offering and you’ll quickly find you’re the CTO of a work-for-hire / consulting company.


Startups face real financial pressures that often drive counterproductive behaviors. That often means doing anything necessary to drive growth in revenue, customers, or usage. But high product variance will often ultimately lead to stagnant growth.

That’s because with every new feature comes a perpetual support cost. And if you keep building one-off features, and can’t fundraise fast enough, that cost will eventually come at the expense of delivering your true market-wide value proposition. If you allow this to happen, you’ll wind up with a company that generates some amount of revenue or usage but has no real value.


Companies that find true product/market fit should see product variance gradually decrease over time, and this should allow the company to grow. Your growth trajectory may be linear when you need it to be exponential, but no amount of per-customer feature work will fix that problem, and you may need to consider pivoting. If pivoting isn’t an option, it may be time to look for an exit.

As the CTO, a critical part of your job is to help the company find its product/market fit and then relentlessly focus on it. You need to hold the line against distractions and ensure the vast majority of resources are spent on features that align with the core value proposition. If you’ve truly found a product offering that is valued by a given market segment, and you can keep your resources focused on building it, growth will follow.

Pick the race you have enough fuel for and win it

I am an avid runner, and one of the great lessons of long-distance running is that if you deplete your glycogen stores, you’ll be unable to finish the race no matter how hard you trained. In other words, you can’t win the race if you have insufficient fuel. This is also very true of startups. If you’re SpaceX or Magic Leap, you’re running an ultra-marathon and you need a tremendous amount of capital in order to have sufficient time and resources to realize the value. But fundraising is hard, and even if you have an amazing product and top-notch talent, there can be significant barriers to acquiring sufficient capital.


The mistake some startups make is that they continue to run an ultra-marathon when they only have fuel for a 5k and that can lead to a premature or unnecessary failure. If funding becomes an issue, start looking for how your product might offer value to another firm. Start allocating resources towards making the product attractive for an acquisition. Aim to win a smaller race and seek more fuel on the next go around.

Final Thoughts


Taking on a CTO role at an early stage startup can be a great opportunity and lead to enormous success, but before you take the leap make sure you know what you’re getting into. Along the way don’t forget to stop and smell the roses. In the words of fellow Seattle native Macklemore, “Someday soon, your whole life’s gonna change. You’ll miss the magic of these good old days”.

Final Final Thoughts


No startup CTO is successful without support from an army of people. So I’d like to offer some gratitude to the following folks:

  • Greg Adams, Chris Hurst: Thanks for giving me an opportunity and treating me like a cofounder from day one.
  • Shane Walker, Cody Jones, Phil LiPari, Pavel Khlustikov, David Ulrich, Julie Bauer, Jason Scott, Carrie Birmingham, Rich Gridlestone, Bill Rick, Zach Pryde, Amy Well, David Debusk, Mikhail Zaydman, Jean-Roux Bezuidenhout, Sergey Kurilkn (and others I may have forgotten): Thanks for being one of the greatest teams I’ve ever worked with.
  • Brandon Shelton, Linda Fingerle, Wayne Boulais, Armando Pauker, Matt Abrams, Matthew Mills: Thank you for being outstanding board members, mentors, and investors
  • Ziad Ismail, Pete Christothoulou, Kirby Winfield: Thank you for the career advice during my first venture into the startup world.

*Note: You can read more about Stabilitas, OnSolve, and our acquisition at the links below:

https://www.onsolve.com/solutions/products/stabilitas/

https://www.geekwire.com/2020/seattle-based-threat-response-startup-stabilitas-acquired-onsolve/

Pair Programming or Bare(ly) Programming


“Sorry we don’t have enough resources, we only have four pairs” – As an engineering leader no other statement has made me cringe more.  After all four pairs is a healthy sized team of eight developers. 

Throughout my career I have run across CTOs, VPs, directors, development managers, teams, and individual developers who swear by pair programming with near religious devotion.   Personally I’ve maintained a healthy dose of skepticism when it comes to pairing as an overarching development philosophy.  

As an engineering leader my job is to build products that delight customers in the most efficient way possible.   Anecdotally, pairing consistently costs more and hence seems irresponsible to use exclusively as a development technique.    But admittedly anecdotal evidence is insufficient so I decided to dig through the research and see if I could find more empirical evidence to support my claim.

Background

Pair programming is an agile software development methodology where two programmers work on the same task using one computer and keyboard.   One programmer is called the driver and operates the keyboard and does the primary coding work.   The other developer, often called the navigator, is responsible for observing the driver and providing guidance in order to speed up problem solving, improve design, and minimize defects.

The potential negative impact of pair programming is immediately clear to most people. By applying two resources to a task you are effectively doubling the cost. So unless there’s an equal or greater improvement in other project variables, pair programming would be nearly impossible to justify. Exploring the problem through a project management lens, we have three variables: cost (including resources), time, and quality/scope. If we double our cost, we’d expect to see an equivalent decrease in time to deliver or an increase in quality or scope (or some factor of each).


In mathematical terms let’s assume the value of any given project X is equal to a weighted linear combination of cost, time and quality/scope. 

X = w_c · Cost + w_t · Time + w_q · Quality/Scope

When pairing our cost is automatically going to double since we’ve applied two resources for a task that in theory can be completed by one.

Cost_pair = 2 · Cost_single

In order for our project value to remain equal or be better we need our other variables to proportionally change in the right direction.   For example if our project now takes 50% less time we could argue we net out even.  Or if our scope or quality double, we would similarly be in a good position.

X_pair = w_c · (2 · Cost) + w_t · (0.5 · Time) + w_q · Quality/Scope  (or, alternatively, with Quality/Scope doubled)

However, in my experience I’ve not seen pair programming live up to these expectations. Instead I’ve seen tasks or user stories take the same amount of time and produce similar results at nearly double the cost. But you shouldn’t take my word for it. Let’s review the literature and see what the experts have to say.

Research

There are actually a fair number of research papers that attempt to prove or disprove the efficacy of pair programming.  That said, in my survey of the literature I found most of the research to be ill designed for comparison to real world corporate product development organizations.  Specific issues include:

  • Developer Skills: Most of the studies rely on university students, who shouldn’t be compared to seasoned professional developers.
  • Non-Production Environments: The majority of the software used for evaluation is very far removed from real product development environments.
  • Organizational Realities: Finally, there is little or no accounting for the organizational churn that happens in a real for-profit company.

In spite of these issues it’s worth exploring these various research studies and the insights they provide on the impacts of pair programming.

Many of the research papers evaluate the impact of pair programming on effort, which in at least one paper is defined as two times the duration or time required to complete a given task [1]. Specifically, effort increases ranging from 15% all the way to 100% have been observed [2]. In one of the better-conducted studies an effort increase of 84% was seen [1]. Since a pair’s effort is just twice the pair’s duration, we can do some math to figure out how much faster pairs complete a task versus a single developer.

Effort_pair = 1.84 · Effort_single = 1.84 · Time_single
Time_pair = Effort_pair / 2 = 0.92 · Time_single

    Or by using our earlier project management equation, with a little rounding we can assume our pairing time weight would be roughly 9/10 the weight required for a single developer.

X_pair ≈ w_c · (2 · Cost) + w_t · (0.9 · Time) + w_q · Quality/Scope

    This is nowhere near the factor of 1/2 or less we said we needed to make pair programming cost efficient.  Well if the research doesn’t support a sufficient decrease in time to completion perhaps there’s research indicating that a given project’s scope or quality will increase enough to offset the difference.  

Unfortunately, once again the results are at best inconclusive, but in many cases support an actual decrease in scope and a minimal or near-zero increase in quality. For example, in [2] a reported 29% decrease in productivity was measured for pair programming teams when measured as a function of completed use cases.

Regarding quality, even in one of the more optimistic papers we only saw a 10% – 20% increase in quality (measured as test cases passed) [3]. According to [2], we only saw an 8% improvement in quality when measuring actual defects. While these improvements are nontrivial, when combined with the time and scope metrics they remain insufficient to offset the associated costs.

    Cherry Picking

“But aren’t you just cherry picking the worst examples to justify your case?” you might ask. Not really, because even in the most optimistic research studies initial results were usually much worse and only improved over time. For example, in [3] initial increases in effort dropped from 60% to 15% over time. Most of the research attributes these improvements to “pair jelling”. In other words, as the pairs get to know each other they become more efficient.

The problem with these studies is that they assume that once a pair jells the gain will hold. However, in any real for-profit organization there is potential for high variability in projects and staff, which means pair jelling is unlikely to be a one-off cost. It is more likely a continuing cost to the business over time.

Several studies also point out that the value of pair programming decreases with simpler tasks [4]. Therefore one must consider the ratio of simple to complex tasks in any given development cycle in order to understand the long-term impacts of pair programming. When I evaluated my own teams, I found multiple iterations where 75% of work items were smaller changes that could easily be tackled by a single developer in the same timeframe.

    Finally, one paper [5] attempted to justify pair programming by evaluating Net Present Value (NPV).   In this paper an argument is made that even if it costs more to pair program, faster time to market warrants the cost.  I take issue with this calculation since it does not factor in the opportunity cost of having those extra resources not work on a different higher priority project.  

    For example if we take the reported 84% increase in effort and assume we finish our project in 9/10 the time of a single developer, we must ask ourselves what happens when a key customer asks for a critical bug fix?   I can tell that customer to wait until I finish my current project or I can split my pair and work on both at the same time at the small cost of a 1/10 increase in duration.  By splitting my pair I’ve delighted my key customer as quickly as possible at a trivial cost. Clearly you need to factor in the opportunity cost of not delighting that customer when evaluating the value of pair programming.

    To Pair or Not to Pair

    So should you pair or not pair?  There are a lot of reasons a team might use pair programming.  In some cases the cost / benefit tradeoff may be worthwhile.  Pairing can be very effective at educating new team members, improving the skills of junior team members, cross training, and reducing the cost of complex tasks.  If you take anything away from this post let it be:

  • Challenge the Efficacy of Pair Programming: If your team or engineering manager wants to exclusively use pair programming, don’t blindly accept it. Collect the data to validate whether it is really cost effective.
  • Pair when it Makes Sense: Use pairing selectively when it makes sense, including educating new team members, improving the skills of junior team members, cross training, and reducing the cost of complex tasks.
  • Factor in Opportunity Costs: Make sure you consider the opportunity costs of projects not being worked on when pairing.

In short, don’t allow yourself to be swayed by a dogmatic insistence that pair programming is better. As a leader your job is to challenge your team to delight customers in the most cost-effective way possible. Pairing should only be used if it definitively contributes to that cause.

    References

    [1] Arisholm, Erik, et al. “Evaluating pair programming with respect to system complexity and programmer expertise.” IEEE Transactions on Software Engineering 33.2 (2007). – Summary available at https://pdfs.semanticscholar.org/9787/c9663cad3a1c21550f2e5e365e70fd01d3aa.pdf

    [2] Vanhanen, Jari, and Casper Lassenius. “Effects of pair programming at the development team level: an experiment.” Empirical Software Engineering, 2005. 2005 International Symposium on. IEEE, 2005. https://pdfs.semanticscholar.org/40dd/fa666bf367cfffaae421dbd3c6170a3e3dc3.pdf

    [3] Cockburn, Alistair, and Laurie Williams. “The costs and benefits of pair programming.” Extreme programming examined (2000): 223-247. http://www.cs.pomona.edu/~markk/cs121.f07/supp/williams_prpgm.pdf

    [4] Lui, Kim, and Keith Chan. “When does a pair outperform two individuals?.” Extreme programming and agile processes in software engineering (2003): 1011-1011. ftp://nozdr.ru/biblio/kolxo3/Cs/CsLn/E/Extreme%20Programming%20and%20Agile%20Processes%20in%20Software%20Engineering,%204%20conf.,%20XP%202003(LNCS2675,%20Springer,%202003)(ISBN%203540402152)(479s)_CsLn_.pdf#page=240

    [5] Padberg, Frank, and Matthias M. Muller. “Analyzing the cost and benefit of pair programming.” Software Metrics Symposium, 2003. Proceedings. Ninth International. IEEE, 2003. http://wwwipd.ira.uka.de/Tichy/uploads/publikationen/32/metrics03.pdf

    End-to-End Speech Recognition: Part 1 – Neural Networks for Executives (I Mean Dummies)

    When I originally contemplated the subject of my next blog post, I thought it might be interesting to provide a thorough explanation of the latest and greatest speech recognition algorithms, often referred to as End-to-End Speech Recognition, Deep Speech, or Connectionist Temporal Classification (CTC).   However, as I began to research the topic I quickly discovered that my basic knowledge of neural networks was woefully lacking.  Several weeks of reading and a few hundred lines of code later, I realized before I could teach a fellow plebe like myself about end-to-end speech recognition,  I probably needed to introduce the fundamentals first.

    With that in mind, what was intended to be a single entry will likely turn into multiple blog posts covering an overview of end-to-end speech recognition and some fundamentals of deep learning that make it possible.  In this first post I’d like to provide a brief introduction to end-to-end speech recognition and then give a more detailed tutorial about one of the basic components of deep learning, a multilayer perceptron, also known as a feed forward neural network.  I’ll then walk you through how I brought all this information together while building a very basic end-to-end speech recognition system.

    End-to-End Speech Recognition

So what is end-to-end speech recognition anyway? At its most basic level, an end-to-end speech recognition solution aims to train a machine to convert speech to text by directly piping raw audio input with associated labeled text through a deep learning algorithm. The resulting model is then able to recognize speech with no further algorithmic components.


And why is this any better than traditional speech recognition systems? Traditional speech recognition systems use a much more complicated architecture that includes feature generation, acoustic modeling, language modeling, and a variety of other algorithmic techniques in order to be accurate and effective. This in turn makes the training, testing, and code complexity far more difficult than it would be with an end-to-end system.


    In other words an end-to-end solution greatly reduces the complexity in building a speech recognition system.   And if that alone doesn’t convince you of the value an end-to-end recognizer brings to the table, several research teams, most notably the folks at Baidu, have shown that they can achieve superior accuracy results over traditional speech recognition systems.

    To validate the possibilities of an end-to-end speech recognition system I decided to build my own.  However, I quickly found that building such a system required advanced knowledge of deep learning techniques.   This is because the current end-to-end systems generally rely on more complex neural network algorithms like Recurrent Neural Networks (RNNs) and something called the connectionist temporal loss function that are difficult to understand if you don’t have a solid understanding of basic neural networks.   So I opted to take a simpler approach and see if I could build a very simple end-to-end recognizer using basic deep learning techniques.   Specifically a feed forward neural network or multi layer perceptron.

    Neural Network Fundamentals

Before I dive into the details, let me provide a quick tutorial on the feed forward neural network. The underlying element of a neural network is called a perceptron or an artificial neuron. Much like a biological neuron, a perceptron takes a series of inputs, performs a function on those inputs, and produces an output that can be passed to other neurons.


The simplest function is just a sum of weighted inputs. However, this function is a linear relationship, and the world is rarely linear, so we apply something called an activation function to impart nonlinearity. There are actually numerous activation functions used in neural networks, some linear and some not, but the Sigmoid and TanH functions are two you will commonly see in the relevant literature.
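
To make that concrete, here is a minimal sketch of a single artificial neuron in plain Python/NumPy, with hypothetical inputs and weights (not taken from the recognizer built later in this post): a weighted sum followed by a sigmoid activation.

import numpy as np

def sigmoid(z):
    # Squash any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs, weights, and bias for one neuron
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.7, -0.2])
bias = 0.1

weighted_sum = np.dot(inputs, weights) + bias  # the linear part
output = sigmoid(weighted_sum)                 # the nonlinear activation
print(output)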


    Now that we know what a neuron is, a neural network is really just a collection of multiple interconnected neurons.   Neurons are grouped and connected in “layers”.   The simplest neural network is a single layer network that connects one or more inputs to one or more outputs.   There is no calculation on the input layer, only the output layer.


Neural networks can grow in complexity by adding additional layers, which are commonly referred to as “hidden layers”. In theory a network can contain an infinite number of layers with an infinite number of neurons, although this is neither practical nor necessary.


    The only remaining question then is how do we know what weights will give us the outputs we are looking for.  A simple feed forward neural network uses a technique called forward and back propagation to train the network and find the optimal weights.   There are dozens of books and blog posts devoted to the subject of how the forward and back propagation algorithms work, but for the sake of this blog post I’ll provide an introductory explanation along with pointers to additional information.

    The main idea requires randomly initializing our weights and pushing the inputs “forward” through the network so we can make an output prediction.   We then use a cost or loss function to calculate how far our prediction was from the expected result.

Our ultimate goal is to reduce our error or cost to the lowest point possible (sometimes referred to as the global minimum). To do this we use an algorithm called gradient descent. The goal of the gradient descent algorithm is to find the partial derivative of the cost function with respect to each weight.
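
In symbols, this is the standard gradient descent update (nothing specific to my implementation): each weight is nudged in the direction opposite its gradient, scaled by a learning rate η.

w_new = w_old − η · ∂C/∂w

where C is the cost function and η is the learning rate.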


In other words, we’re looking for the direction (+/-) and slope of our cost function to tell us how much to adjust our weights, and in which direction, in order to get to zero cost (or close to it). If the gradient is 0 we have reached our minimum. While I won’t go into the details, thanks to the chain rule from calculus we can actually start at the output layer, perform the gradient descent algorithm, and “back” propagate it to the next layer and all the way back to our inputs. Along the way we are calculating how much we need to adjust our weights to get closer to that zero cost.


When training a neural network we continue to forward and back propagate until we have minimized the error. While I have grossly oversimplified the explanation of forward and back propagation, this is fundamentally how neural networks work. I have provided links to more detailed descriptions at the end of this post.

    Putting it All Together

    Now that we have some basic knowledge of end-to-end speech recognition systems and neural networks, we’re ready to make a simple end-to-end speech recognizer.  To build this recognizer I used python and the numpy library to help with the matrix math.

However, before we start we need a simple speech data set, preferably one consisting of utterances with only single words. This eliminates the need to deal with time alignment (i.e. which text goes with which audio segment in time). Luckily I found a great freely available dataset consisting of people speaking single digits 0–9, with fifty utterances per digit per person. This data set met the criteria of being single words while also being sufficiently large to train a neural network.

With labeled audio data in hand, the next step is reading in the audio data and the associated labels. For this I used the python librosa library. Librosa provides easy to use out-of-the-box functions for computing the Short Time Fourier Transform (STFT), which is necessary to get the frequency spectrum of our audio signal (e.g. our input signal). Librosa additionally provides handy functions for computing other audio features like Mel Frequency Cepstral Coefficients (MFCC), which can also be a useful audio input feature (note my code provides an alternative implementation that uses MFCCs instead of the raw spectrum).

import os
import sys
from librosa import load, stft, magphase, feature

for files in file_list:
    # Build the path to each recording and load the raw audio samples (y) and sample rate (sr)
    relative_path = 'recordings/' + files[0]
    file_name = os.path.join(os.path.dirname(__file__), relative_path)
    y, sr = load(file_name, sr=None)
    filesize = sys.getsizeof(y)

    if output_type == 'spectrum':
        # Raw frequency spectrum features via the Short Time Fourier Transform
        spectrum = stft(y, nfft, hop_length=int(filesize / 2))
        mag, phase = magphase(spectrum)
        mag_input.append(mag)

    # MFCC features as an alternative input representation
    mfcc = feature.mfcc(y, sr, n_mfcc=nmfcc, hop_length=int(filesize / 2))
    mfcc = mfcc[1:nmfcc]
    mfcc_input.append(mfcc)

    # The first character of the file name encodes the spoken digit label
    digit.append(files[0][0])

Beyond the audio, we also need to store the associated digit spoken in each audio file. When training a multiclass classifier (in our case our classes are 0–9) it’s common to use something called “one hot” vectors to represent the output. This is just a vector where all the classes are represented by 0 except for the one element representing the actual output class. So in our case we have a 10 element vector, and if the audio file is someone saying “one” the vector would look like [0 1 0 0 0 0 0 0 0 0].

    class digits:
        zero    = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        one     = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        two     = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
        three   = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
        four    = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        five    = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
        six     = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
        seven   = [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]
        eight   = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
        nine    = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

    With our inputs and outputs squared away it’s time to define our network. The variables that make up your network are also known as hyper-parameters. For my end-to-end recognizer I selected the following hyper-parameters: (*Note that selecting hyper-parameters is half art and half science and your choices will be critical to the success of your network.  I have provided additional resources below)

• Number of layers: 3 (input, output, and one hidden layer)
• Nodes in hidden layer: 2048 (1x our frequency bins)
• Activation functions: TanH (Hidden), Sigmoid (Output)
    • Weight initialization algorithm: Xavier or Glorot
    • Learning rate = 0.001
    • Rate decay = 0.0001
input_layer = layers.Layer(inputs=training_inputs.shape[0], neurons=training_inputs.shape[1] + 1)

if mode == 'E2E':
    hidden_layer = layers.Layer(inputs=training_inputs.shape[1] + 1, neurons=2048,
                                activation=activationfunctions.Tanh_Activation,
                                activation_derivative=activationfunctions.Tanh_Activation_Deriv)
    hidden_layer.Initialize_Synaptic_Weights()

    output_layer = layers.Layer(inputs=2048, neurons=training_outputs.shape[1],
                                activation=activationfunctions.Sigmoid_Activation,
                                activation_derivative=activationfunctions.Sigmoid_Activation_Derivative)
    output_layer.Initialize_Synaptic_Weights()

nnet = NeuralNetwork(layer1=input_layer, layer2=hidden_layer, layer3=output_layer, learning_rate=0.001,
                     learning_rate_decay=0.0001, momentum=0.5)

    So now that we have our inputs and outputs, and we’ve defined our network, all we need to do is train using our forward and back propagation functions. Per my earlier description the forward propagation algorithm is quite simple and is really just summing the weighted inputs and applying the activation functions. Using matrix math this can be written in three or four simple lines of code.

def Feed_Forward(self, inputs):
    # Load this batch of inputs into the input layer (the layer has one extra column beyond the raw inputs)
    self.l1_inputs[:,0:self.layer1.neurons-1] = inputs
    # Each subsequent layer is a weighted sum of the previous layer's outputs passed through an activation function
    self.l2_hidden = self.layer2.activation(dot(self.l1_inputs, self.layer2.synaptic_weights))
    self.l3_output = self.layer3.activation(dot(self.l2_hidden, self.layer3.synaptic_weights))
    return self.l3_output

    The forward propagation algorithm gives us our predicted output.  Using that predicted output we can perform our back propagation.  Much like my earlier explanation we need to perform a series of steps for each layer.   Specifically we need to calculate the error, calculate the gradient, and adjust our weights based on the previous two calculations.

def Back_Propogate(self, outputs):

    # Error and delta at the output layer (the delta calculation depends on the output activation)
    output_deltas = numpy.zeros((self.layer1.inputs, self.layer3.neurons))
    l3_output_error = -(outputs - self.l3_output)
    if self.layer3.activation_derivative == activationfunctions.Sigmoid_Activation_Derivative:
        output_deltas = self.layer3.activation_derivative(self.l3_output) * l3_output_error
    elif self.layer3.activation_derivative == activationfunctions.softmax_derivative:
        output_deltas = l3_output_error
    elif self.layer3.activation_derivative == activationfunctions.Oland_Et_Al_Derivative:
        output_deltas = self.layer3.activation_derivative(self.l3_output) - outputs

    # Propagate the error back to the hidden layer through the output weights
    hidden_deltas = numpy.zeros((self.layer1.inputs, self.layer2.neurons))
    l2_hidden_error = output_deltas.dot(self.layer3.synaptic_weights.T)
    hidden_deltas = self.layer2.activation_derivative(self.l2_hidden) * l2_hidden_error

    # Adjust each layer's weights in proportion to its deltas, scaled by the learning rate
    adjustment1 = self.l2_hidden.T.dot(output_deltas)
    self.layer3.synaptic_weights = self.layer3.synaptic_weights - (adjustment1 * self.learning_rate) #+ self.l3_output_adjustment * self.momentum
    self.l3_output_adjustment = adjustment1

    adjustment2 = self.l1_inputs.T.dot(hidden_deltas)
    self.layer2.synaptic_weights = self.layer2.synaptic_weights - (adjustment2 * self.learning_rate) #+ self.l2_hidden_adjustment * self.momentum
    self.l2_hidden_adjustment = adjustment2

To bring it all together we just need to iterate over our forward and back propagation algorithms until we have stopped learning or have reduced our cost or error to its lowest possible point.

    def Train(self, inputs, outputs, iterations):
        for iteration in range(iterations):
            error = 0.0
     
            # random.shuffle(patterns)
            # turn off random
            randomize = numpy.arange(len(inputs))
            numpy.random.shuffle(randomize)
            inputs = inputs[randomize]
            outputs = outputs[randomize]
     
            self.Feed_Forward(inputs)
            error = self.Back_Propogate(outputs)
            error = numpy.average(error)
            if iteration % 10 == 0:
                print('error %-.5f' % error)
            # learning rate decay
            self.learning_rate = self.learning_rate * (
            self.learning_rate / (self.learning_rate + (self.learning_rate * self.learning_rate_decay)))

That’s it! While there is a lot more glue code and learning that went into this implementation, what I have presented here represents the fundamental building blocks of a basic end-to-end speech recognition system. I have made the full project available on GitHub and you can evaluate the code yourself in order to fully comprehend all the details. I’ve also provided a bevy of resources below that helped get me to this point and can do the same for you.

    Final Thoughts

    You might be asking why a senior leader in my position would spend the time required to go through this exercise.  There are some general principles I like to follow and I think anybody managing a research oriented (or really any engineering) team should consider as well.  Specifically:

    • ABL – Always Be Learning:  If you want to innovate you need to be up to speed on the latest technology trends.
    • Earn your team’s respect:  The best way to earn the respect of your technical team is to get into the trenches.  Show them that you understand their job and all the pain that comes with it.  In other words write code (any code), test it, check it in, and push it to production.
• Lead by example: If you want your team to “innovate for the masses”, it’s best to demonstrate the behaviors you are looking for.

Hopefully this post has given you a basic understanding of end-to-end speech recognition systems and neural networks. If you’re really brave, perhaps you’ve learned how to build your own simple end-to-end recognizer. But if you take nothing else away from this article I hope it’s that you’ll invest your time improving your own technical skills and getting in the trenches to earn your team’s respect.

In an upcoming post I’ll dig deeper into end-to-end speech recognition algorithms and how they work. Specifically we’ll cover recurrent neural networks and the connectionist temporal classification algorithms that truly allow these systems to be superior to traditional speech recognition systems. In the meantime I hope you get a chance to “wreck a nice beach”!

    References
    1. “How to build a simple neural network in 9 lines of Python code” – Milo Spencer-Harper
    2. “How to build a multi-layered neural network in Python” – Milo Spencer-Harper
    3. “Understanding and coding Neural Networks from Scratch in Python and R” – Sunil Ray
    4. “How to Compute the Derivative of  Sigmoid Function (fully worked example)” – Jeremy (no last name)
5. “Practical Recommendations for Gradient-Based Training of Deep Architectures” – Yoshua Bengio
    6. “How to train your Deep Neural Network” – Rishabh Shukla
    7. “Understanding the difficulty of training deep feedforward neural networks” – Xavier Glorot and Yoshua Bengio
8. “Deep Learning Basics: Neural Networks, Backpropagation, and Stochastic Gradient Descent” – Alex Minnaar
    9. “Speech Recognition: You down with CTC” – Karl N.
    10. “Deep Speech: Scaling up end-to-end speech recognition” – Andrew Y. Ng et al.
    11.  “Connectionist Temporal Classification: Labeling Unsegmented Sequence Data with Recurrent Neural Networks” – Alex Graves et al.