The brain, the computer, and the economy

“The brain, the computer, and the economy: all three are devices whose purpose is to solve fundamental information problems in coordinating the activities of individual units – the neurons, the transistors, or individual people.” – Robert J. Shiller

I have a love-hate relationship with the idea of neuroeconomics. The materialist neuroscience side of my brain likes the idea that behavior – even behavior resulting from emergent properties of complex networks – is quantifiable and predictable. It’s only predictable, though, if you know all the input parameters (and you can’t know that Subject X has an aversion to green for reasons that have something to do with a lollipop at Coney Island when he was six). But the central fallacy of economics has been the “rational actor” paradigm, which assumes that individuals make rational choices when it comes to money and will always behave to maximize their own economic interests. They don’t. Economists with a clue understand this. Really smart economists are trying to understand the underlying why and how. Let’s start with the experimental result from psychology showing that humans are more likely to make a bad economic decision out of fear of loss than they are to make that same decision out of hope of gain. Does information have any effect?

Wall Street has hired any number of “quants” – people with PhD-level academic backgrounds in physics or math. It makes a certain intuitive sense, because it’s all about numbers, measuring, and modeling. Biologists are generally much more comfortable working with messy parameters and understanding what they can and cannot control within an experiment; most of them, however, find the world of banking and finance fairly inscrutable. The human decisions that underlie actual market behavior don’t always follow “rational actor” predictions because of the influence of those messier parameters – treating noise as valuable information, for instance, and thus the role of fear in the markets. Individual people think they’re making rational decisions, which is why the rational actor paradigm is so appealing. But pure mathematical maximization paradigms don’t always match reality. For example, we’re generally wired for “fear of loss” to have a significantly stronger influence on decision making than “hope of gain”, yet behaviors based on fear of loss often end up causing, or at least ensuring, loss. Do the standard quantitative models take something like that into account?

A few people have done work on incorporating information theory into game theory, but not in the context of market behavior – at least not beyond a two-player scenario involving Nash equilibria. In that work, one of the questions explored was the influence of noise: what happens when a player perceives information as noise and disregards it. The converse, however, also happens: perceiving noise as information, finding some rationalized (not rational) pattern in it, and acting on that imposed pattern. Humans are phenomenal pattern finders – better than computers – but the problem is that we also find patterns where none exist, because that’s what we’re wired to do. We use the ability to perceive and believe in non-existent patterns to rationalize our behavior. This latter case of mistaking noise for signal hasn’t been modeled to the same degree, largely because it’s a messy question and hard to quantify.
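How easy it is to see “patterns” in pure noise can be demonstrated with a toy simulation – a minimal sketch, purely for illustration, not a market model: flip a fair coin 100 times and measure the longest streak of identical outcomes. Streaks of six or more are entirely typical, and they look for all the world like a trend.

```python
import random

# Simulate 100 fair coin flips -- pure noise by construction.
flips = [random.choice("HT") for _ in range(100)]

# Find the longest run of identical outcomes.
longest = current = 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print(longest)  # typically 6 or more -- a "streak" with no cause behind it
```

Anyone watching that sequence unfold in real time – a stock ticking up six sessions in a row, say – would be sorely tempted to infer a pattern, even though none exists by construction.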

Economics experiments demonstrating loss aversion first showed up in the academic economics literature about 20 years ago, from a pair of academics, Kahneman and Tversky, who had a very strong grounding in the psychology of decision making. Daniel Kahneman later won the Nobel for what is known as prospect theory. Kahneman was not an economist, by the way. My neuroscience perspective brought me to these questions from the other direction: what is it about how our brains work that leads to these behaviors? It turns out there is a good amount of literature on the behavioral output itself from an economics point of view, and prospect theory is intended to account for the behaviors that deviate from the theoretical expectation that people will always behave to maximize wealth.
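The asymmetry prospect theory describes can be written down concretely. A minimal sketch of the Kahneman–Tversky value function, using their published median parameter estimates (α = β = 0.88, λ = 2.25):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: gains are diminished
    (the curve is concave), while losses are amplified by the
    loss-aversion coefficient lam."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = prospect_value(100)   # subjective value of gaining $100 (~57.5)
loss = prospect_value(-100)  # subjective value of losing $100 (~-129.5)
print(loss / gain)           # -2.25: the loss "weighs" over twice the gain
```

In other words, with these parameters a $100 loss feels roughly two and a quarter times as bad as a $100 gain feels good – which is exactly the “fear of loss beats hope of gain” result from the psychology experiments.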

It would be very interesting to go back to that original question: how does information – or the perceived value of any given piece of information – affect decision making? Humans are so good at creating patterns and meaning where none objectively exist, often doing so to justify a decision based more on biases in our thinking than on facts. This sounds linear and conscious, but it’s not. It’s a feedback loop, and I would guess that some of the biases, like loss aversion, have a component of seeing patterns where none exist.

Think about that next time you make a decision. Does the pattern you think you’re seeing really exist?

But why does this happen? Next time, split-brain patients, neuroimaging, and creating unconscious biases on purpose.

This entry was posted in Wetware and Hard Science.


  1. John says:


    I like this.

    It may prompt a longish response from me at some point regarding my utter dumbfoundedness at the stupidity of (some parts of) standard economic theory when I began work on a Master’s degree in agricultural economics in 1978. Not all economic theory was balderdash, but much of it was. It relied on stupid assumptions like “rational actors” and “perfect information”. It drove me nuts.

    I wasn’t able to formulate many of my objections in a coherent way during class discussions, but some of them I could. I remember once when we were talking about the concept of “opportunity cost” — the value forgone by not putting an asset, for example, land, to other uses. I said that in some cases, that concept is meaningless. For example, if holding onto the family farm is a farmer’s highest priority in life — such that if he lost the farm he would kill himself — then there is NO cost to him of not selling out to Wal-Mart. It’s like saying, “what is the opportunity cost of not tearing down the Western Wall and the Temple Mount and putting a Wal-Mart in Jerusalem?”. It’s just stupid. It adds no useful insight into anything. “If we’re going to measure the opportunity cost of not selling the land, what about the opportunity cost of not grinding up the farmer’s children and using them for pig feed?” Common sense can often mislead you, but if your whole theory is based on a lot of assumptions that violate common sense and *have no empirical evidence to back them up*, then maybe your theory is a crock.

    This argument and others like it convinced a few of my classmates that some doctrinaire economic theory was sketchy at the edges, but it only convinced my professors that I didn’t understand economics. I remember also arguing for a non-linear, relativistic concept of money – for clearly, for the very very rich there is no difference between money and power, and for the very very poor *any* economic transaction is made under what would be called “duress” if faced by wealthy actors. (No, I’m not talking about “marginal utility”. I understand that. I’m talking something Einsteinian; as in, just as the definition of a unit of time depends on the inertial frame of reference in which you measure it, so does the definition of a dollar. In other words, the maxim that “a dollar is a dollar is a dollar” is false.) This argument also proved to my professors that I was stupid and didn’t understand economics. And so forth.

    Anyway, about 15 to 20 years later people started winning Nobel Prizes in economics for pointing out how economics was a lot more intuitive and useful if you stopped requiring stupid baseline assumptions and started trying to match theory to the actual world of actual people, as you do, Peg, in this article.

    Maybe I’ll try to write this up in longer form after Christmas, but meanwhile let me just state that I would love to see more articles from you on this general theme.

  2. Peg. says:

    Very interesting, John! Some of my thinking about money and banking was formed by conversations on airplanes. One was with a seatmate from Bear Stearns before it went up on the rocks. He pretty clearly was riding the wave, and knew it, but wasn’t thinking about what was under the water. The other was a recent conversation with a high-level VP from another company that I will not name, but who had very wisely figured out where the rocks were and avoided them. Some of the above is from emails I sent him. What struck me about both conversations was that they had no trouble with the following sentiment:

    “Money’s consensual hallucination depends on a surplus of trust over fear. The more people share the hallucination, the more solid the money; the more they doubt it, the more it fades.” – from a Salon article by Scott Rosenberg. He’s talking about digital money and online commerce, but this has been the underlying truth of all markets – it’s about what we believe.

    People are starting to get an inkling of this, in some ways, which is why, IMO, Ron Paul’s gold standard discussion resonates. Ayn Rand’s “objectivism” seems like the sensible opposite of “hallucination”. But, like many things, it’s just not that simple. We are not rational creatures unless we put in the effort to get reliable information and to think about it. Even then, our filtering of what information is reliable and our cognitive biases in how we process it have a very strong influence on the results.

  3. Stearns says:

    If decision-making is subject to perception, then it not only differs among individuals, but also varies over time. A deeper economic understanding will require some treatment of the rate of change of economic momentum (whatever that may prove to be).

  4. Peg. says:

    Howard, that’s a really interesting post. The question you raise about variation in time is also important. Over time one accumulates more experiences that can change the cognitive biases. (Thus my tangential interest in Less Wrong.) And the milieu in which the decisions are made – the economic momentum – would absolutely have an effect.

    Decisions will also vary with physiological and emotional state, and thus the HALT rule: “Never make a decision when you’re hungry, angry, lonely, or tired.”

  5. Helen says:

    I have an anecdote that I like to use to demonstrate everything I hate about game theory as practiced by economists. I took the first course on game theory offered at my alma mater (which y’all know but I won’t mention here), ECO199. One problem set proposed the following scenario:

    There is a game show that features two contestants, A and B. They alternate answering trivia questions, 100 in all. Each question is worth some amount of money that goes into a prize pot. A goes first. If A answers her question correctly, she has the choice of ending the game and taking home the pot, or continuing the game. If she chooses to continue, B gets a question and if he answers correctly, the value of the pot increases and he has the same choice. Whoever stops the game claims the entire pot; the other player gets nothing. Assume A and B are both knowledgeable enough that they are assured of answering all the questions correctly.

    The problem set then posed a two-part question:
    (A) Assuming A and B are perfectly rational, what would happen?
    (B) What do you think would really happen?

    The answer to (A) is of course that A answers the first question and stops the game right there. If each player chose to continue at each step, A would stop at question 99 – her last turn – since otherwise B would answer question 100 and take everything. Knowing this, B would stop at question 98, and it unravels from there to the point where question 1 is the only stage at which A is guaranteed a chance at the pot. So she takes it, even though the winnings at that stage are much smaller than they might be later on.
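    That unraveling can be checked mechanically with backward induction. A minimal sketch, assuming the pot grows by a fixed increment per question (the increment is an illustration; the problem set only said “some amount of money”):

```python
def backward_induction(n=100, increment=100):
    """Backward induction on the alternating quiz game.

    The pot after question i is i * increment.  A[i] is the payoff
    to the player ANSWERING question i under rational play from i
    onward; B[i] is the other player's payoff at that point.
    """
    A = [0.0] * (n + 2)
    B = [0.0] * (n + 2)
    stop = [False] * (n + 2)
    for i in range(n, 0, -1):
        pot = i * increment
        # If the answerer continues, the roles swap at question i+1,
        # so their payoff becomes B[i+1]; past question n there is
        # nothing left to win.
        cont = B[i + 1] if i < n else 0.0
        if pot >= cont:  # stopping is at least as good: stop
            A[i], B[i], stop[i] = pot, 0.0, True
        else:
            A[i], B[i] = B[i + 1], A[i + 1]
    first_stop = next(i for i in range(1, n + 1) if stop[i])
    return first_stop, A[1]

q, payout = backward_induction()
print(q, payout)  # A stops at question 1 with the smallest possible pot
```

    Running it confirms part (A): the “rational” stop is at question 1, because at every question the opponent would stop next and leave the answerer nothing. What the induction cannot capture, of course, is part (B).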

    The answer to question (B) was equally clear to me: There’s no chance A would stop after question 1, nor B after question 2. I suspected they’d probably keep going into the 70s or so, balancing the risk of getting nothing with the potential gain of being able to push it for one more round. Just about everyone I knew in the class answered it in roughly this way.

    When we got our assignments back, we were given full credit for part (A) and none for part (B). We all found that pretty insulting, not only because it seemed we were not allowed to believe that people were anything but perfectly “rational” actors, but because an econ prof had the nerve to tell us we had answered wrong on a question of the form “what do you think?”

    Either he knew what we thought better than we did, or we were not rational enough for him. Or there was a third possibility: He was full of shit. (My classmates preferred this theory.)

  6. Peg. says:

    I’m with you and your classmates, Helen. Since you took that class, economists have started doing more experiments with actual people.
