On Intuition: The Neuroscience of Affect and the Economics of Artificial Intelligence

Cameron Lutz
24 min read · Dec 23, 2021
Header image source: Angus McBride

This essay seeks to show how artificial intelligence is functionally similar to the way our brains generate emotion in order to navigate the complexity of our modern social environments. It blends the nascent field of machine learning with the neuroscience of affective psychology, in hopes of shedding some light on two of the most exciting fields in academia. The examples and concepts are pulled from the following sources:

(1) Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal and Joshua Gans, Professors of Strategic Management at the University of Toronto, with help from Avi Goldfarb, Professor of Marketing.

(2) How Emotions Are Made: The Secret Life of the Brain by Lisa Feldman Barrett, Distinguished Professor of Psychology at Northeastern University.

To understand the prevailing theories of what emotions are and where they come from, we must first understand the ways we have described our emotions up until this point. Feldman Barrett starts her ambitious quest to understand the true origins of emotion by quickly dismantling the persistent assumptions about emotions that have stood for decades, propagated by intellectual titans such as Aristotle, Descartes, Freud, and Darwin. The classical view of emotions holds that there exists a “fingerprint” for each emotion: a circuit in our brain that initiates a discrete instance of sadness, or elation, or frustration, and so on. These circuits cause a series of movements inside and outside our bodies that communicate — to yourself and the world — the emotion you’re experiencing in that moment. Facial movements, sweat gland activation, and changes in heart rate and blood flow are all physiological changes said to accompany different emotions. It has long been believed that our emotions are evolutionary adaptations that served as an early means of constructing a theory of mind about someone other than ourselves, predating oral language. A common narrative in the modern West is that emotions are reflexive, beastly in nature, and naturally at odds with our more rational, higher selves.

Evangelists of the classical view look to Charles Darwin’s book, The Expression of the Emotions in Man and Animals, in which he claims that emotions and their expressions are an ancient part of universal human nature. In the 1960s, psychologist Silvan S. Tomkins ushered in a new domain of scientific study called “emotion recognition”: he tried to find these emotional fingerprints by generating a set of photographs of carefully coached actors displaying facial expressions of anger, fear, disgust, surprise, sadness, and happiness, then asking participants to identify the emotion presented. Whether in America, Japan, Korea, or Papua New Guinea, participants managed to identify the intended emotion, and scientists thought it a closed case: facial expressions must be reliable, diagnostic fingerprints of emotions.

Other scientists worried that this method was too indirect, so they opted for a more objective technique: facial electromyography (EMG), which measures the electrical activity of the facial muscles and precisely identifies which muscles move, and when, during the experience of an emotion. This is when scientists realized something was afoot. Study after study found no reliable signature of specific emotions; at best, there was only a demarcation between pleasant and unpleasant feelings.

A century’s worth of effort to find “emotional fingerprints” has borne no fruit; the bodily reactions associated with any particular emotion vary widely. The body’s autonomic nervous system seemed to be the last hope for the classical view of emotions; surely patterns of unconscious bodily reactions like heart rate or digestion would emerge to confirm the presence of an emotional fingerprint. Unfortunately, none have been found; someone can experience anger with or without a spike in blood pressure, or stress without an increase in skin conductance (sweatiness). Feldman Barrett received early criticism for her idea that…

“…On different occasions, in different contexts, in different studies, within the same individual and across different individuals, the same emotion category involves different bodily responses. Variation, not uniformity, is the norm.”

She then presents her theory of constructed emotion. The theory starts with the premise that while in a wakeful state, you are constructing a simulation in your brain based on current sensory input. I am not talking red pill/blue pill; this is what our brains are actually doing behind the scenes of consciousness. Our brain absorbs each moment and produces changes in our bodies, inside and out, that it thinks are beneficial for our survival, based on all prior experiences similar to the one at hand.

“Think of the last time someone handed you a juicy, red apple. You reached out for it, took a bite, and experienced the tart flavor. During those moments, neurons were firing in the sensory and motor regions of your brain. Motor neurons fired to produce your movements, and sensory neurons fired to process your sensations of the apple, like its red color with a blush of green; its smoothness against your hand; its crisp, floral scent; the audible crunch when you bit into it; and its tangy taste with a hint of sweetness. Other neurons made your mouth water to release enzymes and begin digestion, released cortisol to prepare your body to metabolize the sugars in the apple, perhaps made your stomach churn a bit. But here’s the cool thing: just now, when you read the word “apple,” your brain responded to a certain extent as if an apple were actually present. Your brain combined bits and pieces of knowledge of previous apples you’ve seen and tasted, and changed the firing of neurons in your sensory and motor regions to construct a mental instance of “Apple.””

A mental instance could also be called a concept, which contains information about the object itself as well as the information contained in other concepts related to that object. Your brain simulates these concepts from the collection of neural patterns that represent your past experiences. Your concepts are your brain’s primary tool for guessing the meaning of incoming sensory inputs, meant to help you perceive and flexibly guide your actions in novel situations. Without concepts, the world would simply not make any sense (a state called experiential blindness).

Your brain must also contend with the fact that the body is itself a source of sensory input. Purely physical sensations from inside your body, like those from your heart and lungs, your metabolism, or your changing temperature, have no objective psychological meaning. That is, until our concepts are put in the mix and the sensations can take on a whole new meaning.

Take, for example, the feeling of an ache in your stomach: if you’re sitting at the dinner table, you might identify the sensation as hunger; if flu season is approaching, as nausea; if you’re a judge in a courtroom, as a feeling that the defendant is guilty.

In a given moment, in a given context, your brain uses concepts to give meaning to internal sensations as well as external sensations from the world, all simultaneously.

This, Feldman Barrett believes, is how emotions are made. An emotion is your brain’s creation of what your bodily sensations mean in relation to what is going on around you in the world. Hence, the theory of constructed emotion. Your brain is making predictions about what is about to happen, and your body is constantly adjusting in anticipation of those predictions. With repeated experiences, concepts are formed that cause a neuronal cascade of events telling us what to feel and what to do about it.

Understanding this foundational mechanism of consciousness, that emotions are the output of a prediction machine, helps to guide our understanding of machine intelligence, where prediction is the pièce de résistance.

Greybeards in the C-suite mostly rely on the decades of experience they have accumulated to make decisions, implemented top-down within an organization. Artificial intelligence (AI) operates on the same principle, using data from historical events to make informed decisions about the future. But the speed and scale at which this is achieved are what make machine learning one of the most consequential innovations in all of human history: one that can create and one that can destroy. I’ll save the philosophical arguments for another time, but I will try to present the business applications for the kinds of decision making AI enables. Just as emotions make predictions that serve as prescriptions for action, so too does AI for your business.

AI makes prediction cheap, just as all technological shifts reduce the costs of things that were once expensive. Take light, for example: according to economist William Nordhaus, assuming you are reading this under artificial light, it would have cost you 400x more in the 1800s to yield the same level of illumination. You definitely would’ve thought twice before flipping that switch. Our behaviors evolve to take advantage of newly commoditized resources. When Charles Babbage and Ada Lovelace were conceiving of the first computers, they aimed to make arithmetic cheap. Then came the applications of cheap arithmetic, which have given us everything from the Moon landing to TikTok.

As the cost of prediction decreases ever more precipitously, the capabilities of these machine learning-enabled systems will expand to usher in a whole new generation of technologies that we have not evolved enough to fully embrace. Fear of obsolescence in the face of AI is no longer a niche concern but an aphorism of the times.

I will aim to present the parallels between artificial intelligence and emotional intelligence: human intelligence.

Our ability to make clever tools is what led us to become the dominant species on this planet, ever increasing our capabilities and the efficiency with which we use them. Despite our tendency to anthropomorphize (to attribute human characteristics to nonhuman things), we often think of AI as alien, as separate from ourselves, as a threat to our species. But AI is simply another artifact of nature, another tool with which we try to make our lives easier. Once we can see that our emotions — once heralded as primitive, evolutionary adaptations — are functionally similar to the most advanced technology of our time, both designed to make predictions, we can connect deeply with the nature that created us as we bravely move forth as creators ourselves.

Part I — Within Reason: The constraining parameters within which intelligent systems must regulate and optimize their functions.

Have you ever acted hastily and wondered how you could’ve allowed yourself to act so childishly? Did you end up attributing it to not having eaten yet or not having slept enough? Feldman Barrett reminds us that our consciousness is situated inside a physical organism with frequent and familiar bodily needs. The general feeling — be it pleasant or unpleasant — in any given situation is generated by an automatic process called interoception.

“Interoception is your brain’s representation of all sensations from your internal organs and tissues, the hormones in your blood, and your immune system. …It is in fact one of the core ingredients of emotion.”

This interoception is a careful orchestration of brain networks that issues predictions about your body, tests the resulting simulations against sensory input from your body, and updates your brain’s model of your body in the world. Feldman Barrett mentions two distinct parts of this network that help to simplify the discussion. One is a set of brain regions, called the limbic or visceromotor regions, that serves as the body-budgeting network controlling our internal environment in response to our predictions — speed up the heart, slow down the breath, release more cortisol, metabolize more glucose, etc. The second part includes your primary interoceptive cortex. The neurons there compare the simulation from your brain to the incoming sensory input, computing any relevant prediction error, completing a loop, and ultimately creating interoceptive sensations, the raw feelings from which emotions are constructed. Together, they form a feedback loop that helps to allocate bodily resources based on your predictions of the environment, informed by all of your past experiences put together.
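This loop has a natural computational reading. Here is a minimal sketch of a predict-compare-update cycle in Python; the heart-rate scenario, variable names, and learning rate are my own illustration, not anything from the book:

```python
# Toy predict-compare-update loop in the spirit of interoception:
# issue a prediction, compare it to incoming sensory input,
# and nudge the internal model by the prediction error.

def interoceptive_loop(expected, sensory_stream, learning_rate=0.2):
    for sensed in sensory_stream:
        error = sensed - expected          # prediction error from the comparison
        expected += learning_rate * error  # update the body model toward reality
    return expected

# Example: the brain expects a resting heart rate of 60 bpm,
# but a stressful meeting pushes readings toward 90.
print(interoceptive_loop(60.0, [62, 70, 85, 90, 88]))  # settles well above 60
```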

Previous interpretations of brain activity assumed all 86 billion neurons between our ears acted only in response to stimuli. But the brain is by far the most metabolically expensive organ in the body; evolution wouldn’t have found it beneficial to keep it on standby waiting for a jumpstart. Feldman Barrett explains intrinsic brain activity as the various patterns of structured neuronal firing that do everything from keeping your heart beating, to keeping your lungs breathing, to keeping all your other internal functions running smoothly; a phenomenon that continues from birth until death.

“Intrinsic brain activity is the origin of dreams, daydreams, imagination, mind wandering, and reveries. It also ultimately produces every sensation you experience, including your interoceptive sensations, which are the origins of your most basic pleasant, unpleasant, calm, and jittery feelings.”

Taking your brain’s perspective for a second, it is required to make sense of the very complex array of stimuli we experience in any given moment and to guide our behavior accordingly. We have a finite attentional capacity, meaning we are unable to focus on everything at once. But how does our brain choose what to focus on? Your brain has established a dense recognition network that determines the most likely cause of a sight, sound, smell, etc., based on all of our previous experiences. This is what happens when your brain makes a prediction. Networks of neurons are constantly talking to one another, anticipating what is about to come next; some scientists consider prediction the brain’s modus operandi.

This efficient, predictive process is your brain’s default way of navigating the world and making sense of it. It generates predictions to perceive and explain everything you see, hear, taste, smell, and touch.

Feldman Barrett argues that evolution wired your brain for prediction. If it were merely reactive, as older theories held, it would be too inefficient to keep you alive. We are constantly bombarded with sensory input, and a reactive brain would be far too metabolically expensive because it would require more interconnections than it could maintain.

Sometimes, our predictions and the mental models generated from them are proven wrong. Imagine you are walking through an airport and you take that last step off of the magic airport treadmill: were you surprised by the sudden change of pace? This unexpected result is called prediction error. Errors in the context of neuroscience are not necessarily bad outcomes; prediction error is one of the main triggers for the release of dopamine in the brain because it helps us update our model of the world. (For further reading on dopamine: The Molecule of More by Daniel Lieberman MD and Michael Long.) Dopamine is a neurochemical released when our brains recognize something worth paying attention to. When our predictions are incorrect, our brains have an opportunity to learn. Imagine the utility of this during our epoch as hunter-gatherers: you come across a berry tree after days of scavenging. If we learn that the berries are safe to eat, we are rewarded with dopamine that not only gives us pleasure but also helps us store the location of this new berry tree in our brains for future exploitation. Dopamine is nice to have when things are scarce, but this predictive process is going on all the time, whether our stomachs are full or empty. Our physiological comfort or discomfort is the direct reference for what kind of emotions are generated.
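This learning-from-surprise story maps neatly onto a reward prediction error update, the quantity dopamine neurons are often said to track. A compressed sketch, with the berry values and learning rate invented for illustration:

```python
# Reward prediction error: surprise = actual reward - expected reward.
# Big surprises (dopamine bursts) drive big updates; once the berry
# tree is predictable, there is little error left to learn from.

expected_value = 0.0   # the forager has no expectations for this spot
learning_rate = 0.5

for actual_reward in [1.0, 1.0, 1.0]:   # the berries keep turning out safe
    surprise = actual_reward - expected_value
    expected_value += learning_rate * surprise
    print(f"surprise: {surprise:.2f} -> value of this spot: {expected_value:.2f}")
```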

Recall our body-budgeting system from the interoceptive network. Much of our conscious experience is lived through the tint of our fleeting emotions, day in and day out. If we are hungry, our body-budgets are stretched thin. This is why we get hangry. We are constantly trying to use our emotions to create a social reality with those around us that yields a desirable outcome. When it is almost dinner time and we are running on two cups of coffee and a banana, we might act in a manner that does not represent our best selves. We might get impatient and perhaps even raise our voices because our interoceptive sensations tell us we cannot waste valuable resources being patient or graceful; we must be brash in order to get what we want. Being hangry is a very real and familiar emotion for all of us. So next time you run into an irritable colleague, it’s best to give them the benefit of the doubt; their body-budget is probably running low.

The theory of constructed emotion also leads to a whole new way of thinking about personal responsibility. Suppose you’re angry with your boss and lash out impulsively, slamming your fist on his desk and calling him an idiot. The classical view of emotions might point to a hypothetical anger circuit, partially absolving you of responsibility, but constructed emotion theorists would argue that your brain is predictive, not reactive. This is not to say that the brain has no control over these behaviors; there definitely exists a control network for regulating these predictions to make our behavior more sociable and our reflections more tolerable. We have come to know this network as serving our goal of “emotional regulation.”

Your control network helps select between emotion and non-emotion concepts (is this anxiety or indigestion?), between different emotion concepts (is this excitement or fear?), between different goals for an emotion concept (in fear, should I escape or attack?), and between different instances (when running to escape, should I scream or not?).

Feldman Barrett notes that the control network and the interoceptive network serve as the two primary communication hubs throughout the brain. She goes on to posit that these networks are so involved in synchronizing the brain’s activity that they might be a prerequisite of consciousness. Damage to these hubs results in mental impairments of varying severity.

These automatic, subconscious processes of the brain are what constitute our entire lived experience. In every waking moment, your brain uses past experience, organized as concepts, to guide your actions and give your sensations meaning. Now that we are entering the world of artificial intelligence, a tool built by and for humans, we shall see that this prediction and categorization is not just a felt process, it is also a mathematical one.

Artificial intelligence is a collection of tools meant to perform tasks and nothing more. Tasks are collections of decisions; decisions are based on prediction and judgement, and informed by data. The challenge is identifying the correct “objective function,” which is mathanese for “goal.” The computer is trying to optimize a set of parameters that constitute the magical solution to your problem.

“The function we want to minimize or maximize is called the objective function or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function” (Goodfellow, 2016).
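To make the vocabulary concrete, here is a minimal sketch of “optimizing a set of parameters” by minimizing a mean-squared-error objective with gradient descent; the data and step size are toy values of my own:

```python
import numpy as np

# Fit a one-parameter model y = w * x by descending the gradient
# of the mean-squared-error objective (cost/loss/error) function.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])        # roughly y = 2x

w = 0.0
for _ in range(200):
    residual = w * x - y
    loss = np.mean(residual ** 2)          # the objective function
    gradient = np.mean(2 * residual * x)   # slope of the loss w.r.t. w
    w -= 0.01 * gradient                   # step downhill
print(f"learned w = {w:.2f}, final loss = {loss:.4f}")  # w lands near 2
```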

Minimizing this error function is how artificial intelligence produces its predictions. The parameters of the error function are the factors considered significant. With datasets often imperfect and incomplete, we are left to make predictions using only the information on factors that we THINK to be significant. Even more constraining, the data used to test the predictive model needs to come from somewhere, so oftentimes a portion of the data — say, 20 percent — has to be set aside to validate the predictions produced by the model that was trained on the other 80 percent. Specifically, prediction machines utilize three types of data: (1) training data for training the AI, (2) input data for predicting, and (3) feedback data for improving prediction accuracy.
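A sketch of how those three types of data might be laid out in practice; the dataset and the split are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
examples = rng.normal(size=(1000, 5))    # 1,000 rows of the factors we THINK matter
labels = examples @ rng.normal(size=5)   # toy ground truth

# (1) Training data: fit the model on 80 percent, hold out 20 percent
#     to validate the predictions before trusting them.
split = int(0.8 * len(examples))
train_X, valid_X = examples[:split], examples[split:]
train_y, valid_y = labels[:split], labels[split:]

# (2) Input data: fresh rows fed to the trained model at prediction time.
# (3) Feedback data: realized outcomes paired with those predictions,
#     folded back in to improve accuracy on the next training run.
```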

PREDICTION is the process of filling in missing information. Prediction takes information you have, often called “data,” and uses it to generate information you don’t have.

What good is cheap prediction for your business? Predictive models are already being used for traditional operational improvements like demand forecasting and inventory management. But as these tools become more ubiquitous, things like live translation and autonomous driving will become more capable and perform better than their human counterparts — not a matter of if but when.

Because it is necessary to determine the relative value of different actions and outcomes, it is imperative to identify your core prediction. This means baking your mission statement and core values straight into your AI systems. It often requires leadership teams to realign their objectives if they are to have a successful AI strategy. Cheap prediction helps you make decisions in uncharted territory, and navigating with a compass that points toward True North is a good place to start.

As we have seen, machine learning and emotional intelligence are both made possible by predictions. These predictions are constrained by their operating environment. For emotions, the body-budget is the primary constraint. For AI, correctly identifying the relevant variables and degrees of freedom specific to your business is what limits the predictive performance of your model. These predictions are what enable high-level adaptability and functionality in the world of bits as well as the world of atoms. We will now look at how the quality of past experiences affects predictions of the future, and the consequences that can result.

Part II — Walking Backward: Our predictions are only as good as the quality and quantity of previous learning.

Despite millions of years of evolution and the million hands involved in the emergence of AI, there’s just one problem, and that’s limited datasets. We are entirely unique in this universe; you are a very particular instance of life by virtue of your DNA and the environment in which you’ve lived out your life. It is this singular perspective that can often be a flaw in our efforts to exercise empathy and compassion. With so much ignorance of the world beyond our horizon, it’s fascinating that we manage to get along at all. Often, it is the myths and stories we tell that elicit the shared emotion we humans crave, acting as a medium of societal bonding.

Our experiences, made up of an array of sensory information, are what our brains have to make predictions with; that is it. We take every good, bad, and ugly thing we have been through and use it to constantly update our model of how the world operates, for better or for worse. But this process is rather energy intensive, so it is imperative that it be done efficiently by conveying information with the fewest neurons possible. How does our brain do this? By separating similarities from differences.

The sensory information from sight is highly redundant…and the same is true for sound, smell, and the other senses. The brain represents this information as patterns of firing neurons, and it is advantageous (and efficient) to represent it with as few neurons as possible.

Feldman Barrett reminds us that when we are in a social situation, our actions are what our brains have decided to be the “winning action,” behavior that best fits this situation to yield our desired outcome. This process is called categorization. It also goes by many other names in science: Experience, Perception, Conceptualization, Pattern Completion, Perceptual Inference, Memory, Simulation, Attention, Morality.

Suppose you see a stranger at the mall who you have an intuition is a good friend from times past. As you walk closer, you are given more and more evidence that this is your buddy from college, yet you cannot be certain. While you’re starting to feel joyous at the idea of catching up, your brain is also busy predicting what the next few seconds hold by preparing your body for such elation:

First, your cascade of predictions explains why an experience like happiness feels triggered rather than constructed. You’re simulating an instance of “Happiness” even before categorization is complete. Your brain is preparing to execute movements in your face and body before you feel any sense of agency for moving, and is predicting your sensory input before it arrives. So emotions seem to be “happening to” you, when in fact your brain is actively constructing the experience, held in check by the state of the world and your body.

Claude Shannon is considered the father of information theory, a collection of mathematical tools used to separate real information from noise using the fewest symbols required. Information theory is the bedrock of our modern society, given our heavy reliance on telecommunication services. Our neurons talk to one another too. Information collected from your senses is parsed through various layers of your brain, which reference all other similar sensory episodes from your past. The cascading neural pathways that Feldman Barrett refers to are the “mental maps” that compose our conscious thought and our models of the world. Our brains need to encode as much information as possible while spending the fewest resources to do so.

Enter heuristic bias. Our judgement, decision making, and meaning making are all subject to pitfalls when our brain is forced to take shortcuts. But it is precisely these shortcuts that conveyed an evolutionary advantage, producing models of the world that were “correct enough” to ensure our continuity into the future. How much additional information do we need to determine whether or not that’s a lion in the brush before we start running? Our species has come a long way, and now “correct enough” is no longer good enough, especially when artificial intelligence has the ability to amplify these shortcuts and exacerbate inequality.
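To make Shannon’s “fewest symbols” idea concrete: his central quantity, entropy, is the minimum average number of bits needed to encode messages from a source. A tiny illustration (the coin examples are my own, not from either book):

```python
import math

def entropy_bits(probabilities):
    # Shannon entropy: the floor on average bits per symbol.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin needs a full bit per flip; a heavily biased coin is
# far more predictable, so it compresses to far fewer bits.
print(entropy_bits([0.5, 0.5]))    # 1.0
print(entropy_bits([0.99, 0.01]))  # ~0.08
```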

Artificial intelligence is not an alien species separate from us; it is simply another tool in the toolkit. But it is our most powerful tool yet. It is a collection of systems built by humans, subject to our mental shortcuts and historical biases. Just as emotions generate predictions based solely on past experience, artificial intelligence runs into the same problem: we only have access to so much data. We are now seeing a new gold rush, where data is the new gold. Entrepreneurial spirits during the California Gold Rush got rich by opening mining supply shops rather than mining themselves, because they knew the banker relies on gold while the supplier relies on hope. And as we saw, there was plenty more of the latter than the former.

Today, Big Tech gives us these incredibly valuable social networking platforms for free because not only are they the supplier, but they’re also the bank. Remember, if you’re not paying for the product, you are the product. The more data they have, the more complete a picture they can make of you to sell to advertisers. The more data they have, the better their predictions.

The impact of small improvements in prediction accuracy can be deceptive. For example, an improvement from 85 percent to 90 percent accuracy seems more than twice as large as from 98 percent to 99.9 percent (an increase of 5 percentage points compared to 2). However, the former improvement means that mistakes fall by one-third, whereas the latter means mistakes fall by a factor of twenty. In some settings, mistakes falling by a factor of twenty is transformational.
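The arithmetic is easy to verify: what matters is the mistake rate (one minus accuracy), not the accuracy itself.

```python
def error_reduction_factor(old_acc, new_acc):
    # How many times smaller the mistake rate becomes.
    return (1 - old_acc) / (1 - new_acc)

print(round(error_reduction_factor(0.85, 0.90), 2))   # 1.5: mistakes fall by one-third
print(round(error_reduction_factor(0.98, 0.999), 2))  # 20.0: mistakes fall by a factor of twenty
```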

Mathematical models can only be built on finite datasets and therefore must be limited in operational scope, lest we take answers that come from the super-advanced Zoltar as divine truth. Every dataset is incomplete; we must accept this fact. We must conduct due diligence to balance the bias-variance tradeoff. This tradeoff comes into play when trying to minimize two types of error — bias and variance error — in a statistical learning model by varying the significance assigned to any given parameter.

Imagine you’re at a gun range, and four different target papers are reeled in after ten rounds are emptied down range at each. On one sheet, every shot missed the bullseye, but all ten landed in a tight cluster in one of the corners. Translated into machine learning, this is high bias but low variance. Another sheet had all ten rounds evenly spread within about four inches of the center bullseye: low bias, but high variance. A professional sharpshooter — low bias, low variance.

Ideally, one would want to build a model that minimizes both bias and variance, but this is near impossible. A model that captures the patterns in the data it has seen but also generalizes well to unseen data is the holy grail of supervised learning, and such a dream is incredibly difficult to realize. The key is selecting the correct features that serve as reliable indicators for your prediction machine. To decrease variance, increase the size of the training dataset or reduce the number of “predictors”; to decrease bias, give the model more flexibility, which normally means adding relevant predictors. The authors advise that if you are going to build a machine learning architecture, it helps to have deep knowledge of all the relevant degrees of freedom and system dynamics.
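The gun-range picture can be reproduced numerically by fitting polynomials of increasing flexibility to noisy data and scoring them on fresh data; the toy setup below is my own, not the authors’:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sample(n=30):
    x = np.linspace(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

x_train, y_train = noisy_sample()   # data the model sees
x_test, y_test = noisy_sample()     # unseen data from the same process

for degree in [1, 3, 9]:
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, unseen error {test_err:.3f}")

# A straight line (degree 1) underfits: high bias, shots clustered off-target.
# A degree-9 fit tends to chase the noise: low training error but higher
# variance on unseen data. A middle degree usually generalizes best.
```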

There are also hidden risks with data collection that firms should be aware of. Businesses must minimize the expense of data collection, both in monetary terms and in terms of consumer privacy. Additional data has diminishing returns, enhancing prediction accuracy only so much; more data does not always mean more value creation.

Artificial intelligence is limited by the amount and quality of data. So too, our emotions. Our emotions are generated from the inside out, subject to our current state of physiological comfort or discomfort. What are the consequences of our limited personal experience on our expectations of a world beyond our comprehension? What are the consequences of having limited past observations — collected by biased individuals and institutions — inform an unknown future?

What we should be concerned about is the actions that follow these predictions, where the rubber meets the road. The authors argue that AI’s most immediate impact will be seen at the decision level. Tasks are simply collections of decisions, and each task can be broken down into discrete but interdependent steps. Recall from earlier that prediction machines need three types of data: input, training, and feedback.

The authors lay out a template for deconstructing workflows into tasks and tasks into decisions. By answering the following questions, you and your team can have an informed discussion about which decisions within which tasks, when automated, would generate the most ROI (a toy encoding of this template in code follows the list).

  1. Action = What are you trying to do?
  2. Prediction = What do you need to know to make the decision?
  3. Judgement = How do you value different outcomes and errors?
  4. Outcome = What are your metrics for task success?
  5. Input = What data do you need to run the predictive algorithm?
  6. Training = What data do you need to train the predictive algorithm?
  7. Feedback = How can you use outcomes to improve the algorithm?
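One way to put the template to work is to encode it as a simple data structure so a team can inventory candidate decisions side by side. The field names mirror the seven questions; the delivery-dispatch example is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str         # 1. what are you trying to do?
    prediction: str     # 2. what do you need to know to decide?
    judgement: str      # 3. how do you value outcomes and errors?
    outcome: str        # 4. what are your metrics for success?
    input_data: str     # 5. data needed to run the algorithm
    training_data: str  # 6. data needed to train the algorithm
    feedback_data: str  # 7. how outcomes flow back to improve it

dispatch = Decision(
    action="Dispatch delivery vans each morning",
    prediction="Tomorrow's order volume per neighborhood",
    judgement="A late delivery costs five times an idle van-hour",
    outcome="On-time rate and fleet utilization",
    input_data="Today's orders, weather, and traffic",
    training_data="Three years of delivery logs",
    feedback_data="Realized delivery times, folded back in weekly",
)
```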

We have seen the many challenges facing the implementation of predictions that stem from insufficient datasets; in terms of emotion, from our lack of experience. But we must reconcile these shortcomings by making an earnest effort to learn as much as we possibly can. In organizations, this is why diversity should be treated as a priority, not an afterthought. We should strive to understand all of the possible edge cases we could run into in the future.

Part III — Beyond the Shadow of a Doubt: Risks and Tradeoffs

One important strategic tradeoff firms have to make concerns the rate of adoption. Incumbents might be tempted to “wait and see,” while the incentives for startups to adopt AI systems will be much greater. They may perform poorly at first, but as prediction machines learn, adapt, and improve, hard-coded machines will prove unworthy competition in this brave new world of artificial intelligence. Another strategic risk the authors mention is timing: when to release AI tools into the wild. While commercial use allows access to real operating conditions, vastly more data, and potentially dangerous edge cases, there can be even more risk in releasing an AI system that is not sufficiently trained. These risks include reputational damage and/or harm to consumer safety, and they have many firms second-guessing. It all comes down to whether or not the benefits of faster learning outweigh the costs of poor early performance.

The authors go on to describe what they believe to be the relevant AI risks facing us today:

  1. Predictions from AIs can lead to discrimination. Even if such discrimination is inadvertent, it creates liability.
  2. AIs are ineffective when data is sparse. This creates quality risk, particularly the “unknown known” type, in which a prediction is provided with confidence, but is false.
  3. Incorrect input data can fool prediction machines, leaving their users vulnerable to attack by hackers.
  4. Just as in biodiversity, the diversity of prediction machines involves a trade-off between individual- and system-level outcomes. Less diversity may benefit individual-level performance, but increase the risk of massive failure.
  5. Prediction machines can be interrogated, exposing you to intellectual property theft and to attackers who can identify weaknesses.
  6. Feedback can be manipulated so that prediction machines learn destructive behavior.

A key strategic site for implementing AI is where the uncertainty of the business lies: at the boundaries of a firm, where it interfaces with outside vendors and business partners. Prediction machines will make it easier to draft contracts that outsource labor or capital equipment because of greater knowledge of the various ebbs and flows of demand. This is the current edge of AI’s impact on an organization, because these systems still perform rather poorly compared to their human counterparts at judging the value of an outcome.

Producing better predictions means the value of judgement increases. Prediction machines that lower the cost of prediction also increase the value of understanding the rewards associated with actions. While we could in theory pass the responsibility of judgement to the AI, this is too costly to code upfront; it is more efficient if the prediction is passed to a human who then applies the judgement. The authors anticipate a rise in “prediction by exception,” whereby machines generate most predictions because those are predicated on routine, regular data, while rare events are escalated for human assistance.
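A minimal sketch of that division of labor; the model interface, threshold, and queue here are stand-ins of my own, not anything from the book:

```python
CONFIDENCE_THRESHOLD = 0.90   # below this, the machine defers to a person

def decide(case, model, human_queue):
    prediction, confidence = model(case)   # model returns (answer, confidence)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction                  # routine case: act on the machine's call
    human_queue.append(case)               # rare/uncertain case: human judgement
    return None
```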

Prediction machines are capable of making sense of complex decisions, but one challenge will remain: human judgement. We are the ones who decide how and when these results are implemented. Judgement can vary widely from firm to firm, predicated on access to quality data and the goals the human assistants are optimizing for.

This challenge is uniquely human. In terms of affective neuroscience, judgement is the emotion elicited by consciousness. As Shakespeare wrote in Hamlet, “…there is nothing either good or bad, but thinking makes it so.” This affective judgement is not as universal as the Classicists would’ve liked to believe. How our minds generate emotion may be similar everywhere, but the words assigned to these sensations can vary widely from culture to culture. This is why I think the study of foreign languages can present the fullest range of human experience, of suffering and elation. By acknowledging the cultural wisdom embedded in the language used to talk about emotion, we can empathize to a much greater extent than if we just assume emotion is the same all around the world. This is where we realize that a monolithic future of global English adoption is neither predestined nor desirable.

Feldman Barrett shows us that the function of emotion is incredibly similar to the function of artificial intelligence. Both serve to (1) make meaning, (2) prescribe action, and (3) act as conduits of influence. Emotions help to create the social reality upon which all civilization is built. Emotions enable cooperation through shared concepts of the world, also known as collective intentionality. We actively invent our own reality by ascribing different words to various concepts that influence how we think and how we feel. Beyond collective intentionality, emotions and the words used to describe concepts are what enable mental inference: figuring out the intentions, goals, and beliefs of others. Despite being agnostic, I can appreciate how religion helped us to organize these conduits of social influence and enabled cooperation beyond our immediate kin. Emotions have influence in the sense that they regulate your body budget, which actively creates your social reality. In terms of artificial intelligence, this influence comes from the ability to affect the decisions of the humans that create and use these systems — for better or for worse.

In the age of disinformation, we see the unravelling of social reality because we are losing our sense of collective intentionality. Our mental models of the world are becoming so fragmented that it seems cooperation is devolving into chaotic disorder. Yet I remain optimistic about our ability to create tools that allow us to establish a ground truth from which thoughtful disagreement — progress — can spring. I hope our exploration of human emotion and artificial intelligence has dulled any luddite pessimism about the bright future we have ahead of us. I also hope it leads you, my dear reader, to practice more patience with our human and computational co-inhabitants as we collectively try to get ourselves out of this looming climate crisis and usher in a future where our kids are allowed to have their own problems rather than inheriting ours.

If you have made it this far, thank you for taking the time, it means more to me than you will ever know. Merry Christmas and Happy Holidays!
