Unchecked: The architecture of disinformation
Misinformation and disinformation thrive in today’s technology landscape, and arguably present the greatest threat to modern society. Information architecture – the practice of designing and managing digital spaces – has an opportunity to intervene. This podcast looks at disinformation from an information architecture perspective, and considers ways to expand the practice of IA to address this new reality.
•••
What is Information Architecture? Information architecture is the practice of designing virtual structures – the shape and form of online spaces and digital products. When you click on a navigation menu or follow the steps in a process, you're experiencing the information architecture of a website or digital product.
•••
What is disinformation? Understanding disinformation is the purpose of this podcast. We are trying to figure out exactly what it is and what it means. If information architecture is the practice of designing virtual spaces, then disinformation is something that can occupy that space to disrupt the user's experience. Alternatively, it is a way of manipulating the space (like flooding it with irrelevant facts) to achieve an end unrelated to the space's original intention.
Episode 5: Disinformation and cognitive bias, with David Dylan Thomas
SYNOPSIS
David Dylan Thomas joins Rachel and Dan to talk about cognitive biases. David explains the fundamental attribution error, the framing effect, and the confirmation bias. All of these contribute to a skewed perception in which a person’s misconceptions about the world can be reinforced or exploited. The conversation leads Rachel to suggest the lens of manipulation and Dan the lens of belonging.
+ + +
STORIES OF DISINFORMATION
Mary Toft
- Dan quotes from Mary Toft; or the Rabbit Queen, by Dexter Palmer
- The true story of Mary Toft, 18th century medical hoaxer (Wikipedia)
AI Policy
- Directive to remove AI Safety from agreement on cooperative research and development agreement (Wired)
- News story about executive order on woke AI (Wall Street Journal)
- The executive order to “prevent woke AI” (Whitehouse.gov)
+ + +
INTERVIEW WITH DAVID DYLAN THOMAS
- David Dylan Thomas
- Design for Cognitive Bias
- The Cognitive Bias Podcast
- Story from The Hill about Facebook algorithm
- White Meat, the movie
+ + +
LENSES
Manipulation
Content can manipulate a person’s hope and joy, and anger and fear, triggering a physiological reaction. But the content of a system is not solely responsible for this effect on users.
- What role does the ecosystem play in framing and presenting information in a way to garner an emotional response?
- How might someone use the system to elicit a powerful emotion?
- What emotions is the system trying to exploit?
- How does the system measure and/or classify information based on its level of physiological activation?
Belonging
The intent of disinformation is often to make someone feel like they belong to a community.
- What role does belonging play in the system?
- What aspects of the system (labeling, categorization) create a sense of belonging?
- How does the system take advantage of a person belonging to it?
- How does the system model users as information objects?
- What is the emotional component of the relationships represented in the information space?
_____________________________________________________
Personnel
- Dan Brown, Host
- Rachel Price, Host
- Emily Duncan, Editor
Music
- Turtle Up Fool, by Elliot
_____________________________________________________
Unchecked is a production of Curious Squid
Curious Squid is a digital design consulting firm specializing in information architecture, user experience, and product design
Can you remember what the web was like pre-algorithm? Like just discovering stuff, having nothing, nothing handed to you.
Announcer: You're listening to Unchecked, the podcast about the architecture of disinformation, with Dan Brown and Rachel Price.
Rachel: Hey Dan.
Dan: Hello, Rachel.
Rachel: How are things?
Dan: Uh, good. We had what I will call a mini hiatus because you and I both went on vacation.
Rachel: You were very European.
Dan: Where did you go on vacation?
Rachel: I went to the Oregon Coast.
Dan: That is one of my favorite places in the world.
Rachel: Yeah. I'm going back to the Oregon Coast in August for a second vacation. YOLO, Dan. But you went somewhere much farther away.
Dan: It's true. It's true. My family and I went to the UK for 10 days. It was delightful. My father is a huge Anglophile, so I spent a lot of time there in my childhood. We would vacation there sometimes; he just loves English culture. And, you know, there was part of me as an adult that was like, okay, whatever. But we went, and despite my best efforts, I loved it. I loved everything about it. Certainly when I was a kid there was a reputation that English food isn't very good, but we had really great meals there. We spent a few days in London, a few days in Bath, and then my wife has family along the south coast, so we went to see them and just had a really delightful time. The highlight for me was seeing the Roman baths in Bath. They've built this enormous museum. The four of us are huge history nerds, so we got the audio tour and we were just locked in. We spent about two hours wandering around this museum. I had such a great time.
Rachel: That's awesome. Dan, what story do you have for today?
Dan: I have this story, and I'm a little worried about it because it's going to be really good.
Rachel: Hold on, let me get my...
Dan: When you and I were first planning this, I asked you: misinformation, disinformation, it's a pretty big topic and can cover a lot of things. What is it that you want to avoid? What's kind of our boundary? And you were like, I don't want to get into conspiracy theories. Like, that feels like the line. I swear to God, Rachel, that is just in my head almost all the time. But I got a book recommendation from our good friend, information architecture stalwart Karl Fast. When I told him we were doing this, he recommended this book called Mary Toft; or the Rabbit Queen. Mary Toft was an actual person, so it's a historical fiction novel. She was around in the early 18th century, and in the 1720s, she gave birth to 17 rabbits.
Rachel: I'm just staring at you.
Dan: I was expecting a little bit more of a reaction.
Rachel: Seventeen rabbits.
Dan: She was the marvel of the medical and spiritual communities in London. The king even took an interest in her case. And of course, it turned out to be a complete and total and utter hoax.
Rachel: No.
Dan: But for a few months, it was all London society could talk about. Dexter Palmer is the author who wrote this fictionalized account. It's a really nicely written book. He's really breathed life into these historical characters.
Rachel: What is otherwise an extremely boring account of a woman having 17 rabbit babies?
Dan: So we follow John Howard, who is the doctor in the town just outside of London where this woman lived and gave birth to rabbits. And we follow Howard and his apprentice, a young man named Zachary. After they witness the birth of the rabbits, Howard sends a letter to London, as one does, to some prominent physicians. And of course, all of them then descend on the town. The doctors are captivated by this case, and they concoct all kinds of theories as to why this woman would be giving birth to rabbits. And we start to get a sense that part of this is their ambition, right? There's this wanting to make a name for themselves. And this is our first hint that we tend to create more than is really there, that our understanding of a thing often comes from what we want to believe about it. Which is why I'm bringing it up here. I think it tees up our conversation with David Dylan Thomas shortly. We're going to talk to him about cognitive bias. Of course, that concept wasn't around in the 18th century. Anyway, word gets to King George I. This is true, and they arrange for Mary to come to London so the king can see her. And they keep her there, kind of sequestered, for about a week. And of course, no more rabbits come, because they're keeping constant watch over her. But in Dexter Palmer's fictionalized account, he describes this group of people that gathers in front of the building where she's staying, and they hold this sort of vigil. And he provides these four or five first-person accounts of folks who show up and are mesmerized or captivated by Mary and what she represents, and how they find solace, even healing, in her presence. It's not only that the people want to believe that it is true, but also that they see what they want to in the situation. It's like they're holding up a mirror, but it's a mirror that reveals a better version of themselves.
Dan: Can you hear my cat making noise?
Rachel: I was gonna ask if that was an animal.
Dan: That was the cat who gave birth to many kittens. That was not a spiritual journey for us.
Rachel: That wasn't a spiritual journey.
Dan: No, it was a sanitation journey for us. Anyway, it's not hard to see misinformation in this way, this notion that when we look at a piece of information, we often see what we want to. Anyway, the doctors eventually realize it's a hoax, and there's this chapter where John Howard, the original doctor, goes to Mary to implore her to tell the truth, right? To reveal that this has been a hoax. This chapter is phenomenal; it's this kind of meditation on truth and truthfulness. I could read any of those lines now, but I picked one out that I really, really liked. And if the cat lets me, I will read it to you now. So imagine this is John Howard. He's realized that Mary is hoaxing them, and he says: "You only had to deceive me once, but I had to deceive myself an uncountable number of times after that, day after day, minute by minute, maintaining my own constant vigil against an incursion of common sense that I had to force myself to see as its opposite, as a lack of faith." You know, we sometimes paint people who are victims of misinformation as passive consumers of it, but here a victim of misinformation acknowledges he played a very active role, deciding to ignore all the common sense that suggested this was not real.
Rachel: Wow, that's powerful. Thank you.
Dan: What story do you have for us?
Rachel: Less entertaining by far. I'm gonna talk about this in very broad strokes. In March of 2025, there was a directive from NIST that eliminated mentions of "AI safety," "AI fairness," and "responsible AI" from its cooperative research and development agreement for this thing called the AI Safety Institute Consortium. So basically, this is saying that these things are no longer important as we research AI. Essentially, from my understanding, this kind of defunds AI bias reduction research. That was in March of 2025. Now, as of July 21st, 2025, according to the Wall Street Journal, the Trump administration was supposedly preparing an executive order targeting "woke AI," claiming that they actually are interested in AI bias reduction work, just not in the direction we were previously thinking. So I pulled these two articles together for today's story because we can watch this framing change over a pretty small window of time. The current administration moved from framing bias and fairness in AI as something extra that doesn't need to be understood or researched, and then we watched them move that frame to: oh, actually, bias is really important, and the issue is left-leaning bias that needs to be eradicated. I happen to know this topic is going to be covered in today's interview with Dave, so I pulled it in here today. I'm just gonna leave that there for you as a little bit of foreshadowing.
Dan: Uh, we're gonna talk to David Dylan Thomas in just a sec. Anything you want to tell the folks about the conversation?
Rachel: You're about to have so much fun. It's so great.
Dan: I'm really excited because we get to talk to David Dylan Thomas. You might know David as the author of Design for Cognitive Bias. He's also the creator and host of the Cognitive Bias Podcast. He's been practicing content strategy and UX design for twenty-some-odd years. But he also lives a double life as a filmmaker, which I hope we'll get to talk about later. David, thank you so much for joining us.
David: Great to be here, Dan.
Dan: Rachel and I have been talking about misinformation and disinformation as not just about the information itself, but about the information ecosystem, the spaces in which information lives. And that disinformation is often about manipulating those spaces. A big part of the information space is the people who are participating in it, right? And they bring their own, as my people might say, mishigas to those spaces. They bring their own stuff. When Rachel and I first started thinking about this podcast, we really wanted to bring you on to talk cognitive bias. So as you think about people participating in information spaces, obviously cognitive bias plays a role. Can you talk about, in terms of some of the research that you've done, is there a particular cognitive bias that comes to mind that you feel makes people susceptible to misinformation?
David: Oh yeah. Well, first off, I just want to say that I feel like cognitive mishigas is so much more descriptive than cognitive bias. Bias makes it sound all scientific, but in real human terms, what it is when you witness it is definitely cognitive mishigas.
Dan: So we'll do a book together.
David: Exactly, exactly. So the cognitive bias I think you're going to encounter first and foremost in misinformation is confirmation bias. And basically it's what you think it is, right? It's one of the more popular biases. You get an idea in your head and you really only look for evidence to support that idea. And if you ever see evidence that doesn't support that idea, you yell fake news and you move on. The example I always like to come back to is the lead-up to the Iraq War. The whole deal was: oh my god, there are weapons of mass destruction, we've got to get in there before he gets us. And we get in there, and even the president of the United States, who was like, oh yeah, there's weapons of mass destruction, three months in is like, yeah, we didn't find anything. And you'd expect the number of people who believe there are weapons of mass destruction to go down, but it stays high. Even 14 years later, you've got like 50% of Republicans believing there are weapons of mass destruction, and 30% of Democrats believing there are weapons of mass destruction. And this thing comes in cycles, man, because when it was looking like we might go to war with Iran this month, it's a lot of the same. Like, well, you've got to do this and get them before they do that. A lot of the same drumbeat. But that drumbeat usually comes down to confirmation bias. You get this idea in your head and you really only look for information to confirm it. And you can see how misinformation is basically the fuel for that. The misinformation is that fake evidence you are clinging to because it supports your belief.
Dan: You know, we talk about the algorithm a lot, the algorithm as part of social media. And it feels like the algorithm is really an enabler for confirmation bias. Can you talk a little bit about that? Is that something you looked at?
David: So I should clarify: when I say my research, this is me looking at other people's research and writing a book about it, which is still research, but I don't want people to think I'm in like a lab coat. So think of your own brain as an algorithm, right? Because the reason confirmation bias happens in the first place, the reason any bias happens in the first place, is because you, me, everyone we know has to make like a trillion decisions every day. Even right now, I'm making decisions about how fast to talk or what to do with my hands. If I had to think about every single one of those decisions, I would never get anything done. So most of our lives are on autopilot. That's a good thing, but sometimes the autopilot gets it wrong. We call that bias. That is basically an algorithm, right? The term really just means something you can run in the background, something that doesn't require active thought. So the algorithm, if you will, for confirmation bias is: okay, everybody, we've decided this is what we believe. So when we're deciding what information to take in, because we can't take it all in, it's got to go through this filter that says, yes, this is confirming what I believe. And in fact, that's like all you see. Highlight that. Oh, yeah. The trees are out to get us. Okay, I'm gonna look around now. Oh, that tree looks suspicious, right? That isn't me actively thinking about whether that tree is suspicious; that's me just deciding, oh yeah, that tree is suspicious. Why? Okay, I will backtrack now and decide why, right? So that is our own natural algorithm. The mechanical version of that is: hey, I'm gonna write a program that says every time you see a tree, bump it to the top of the page and make it red with fire on it so people get scared of it.
David: And if someone starts out looking for videos about how to fix your washer when it breaks, three videos from then, I want them looking at all the Nazi propaganda about washers, right? I mean, Grok is probably the low-hanging fruit here, where it's like, hey, maybe we want more South Africans to emigrate. So maybe, Grok, make sure every answer, no matter what you're answering, includes something about South African white genocide. Right. Just for kicks.
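The "mechanical version" of confirmation bias described above, a program that bumps the scary tree to the top of the page, can be sketched as a toy ranking function. Everything here (the `Post` fields, the boost factor, the sample feed) is hypothetical and invented for illustration; it is not any platform's real ranking code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    relevance: float  # baseline relevance score
    arousal: float    # 0..1: predicted fear/anger activation

def rank_feed(posts: list[Post], fear_boost: float = 2.0) -> list[Post]:
    """Toy ranker: multiply each post's score by an arousal boost,
    so high-fear content gets 'bumped to the top of the page'."""
    return sorted(
        posts,
        key=lambda p: p.relevance * (1 + fear_boost * p.arousal),
        reverse=True,
    )

feed = [
    Post("Washer repair how-to", 1.2, 0.1),
    Post("Cute cat video", 1.0, 0.3),
    Post("This tree looks suspicious!", 0.8, 0.9),
]
print(rank_feed(feed)[0].text)  # the low-relevance, high-fear post wins
```

The point of the sketch is that nothing in it inspects truth: the only lever is predicted activation, which is exactly the filter confirmation bias runs in our heads.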
Rachel: For context, Grok being the AI currently deployed on X, which we all know as Twitter. Right.
David: That is the one thing I'm willing to deadname.
Dan: In the context of our modern information landscape, can you talk a little bit about some of the mechanisms in place that lend themselves to misinformation, to falsehoods being the currency of it?
David: Yeah, I mean, the two plus two is four here. Two is confirmation bias, and the other two is framing effect. Framing effect is, in my opinion, the most dangerous bias in the world. And the way it works is: if you see a sign in a store that says beef, 95% lean, and another sign that says beef, 5% fat, people are gonna line up for the 95% lean, right? But it's the same thing. I just framed it in a certain way. And that's honest framing, right? That's me actually telling the truth and still framing it. I can frame anything any way. If I make some ridiculous claim that Haitians are eating their pets, that is a frame to put on immigration. And if your confirmation bias is already looking for reasons to be afraid of immigrants, oh, okay, now two plus two equals four. I've got the outcome, which is I want you afraid of immigrants. And I want you to continue to be afraid of immigrants and vote for any policy that hurts them, or any candidate who hurts them, right? That's the outcome of those two biases at work. So the algorithm, of course, is then going to highlight any fear-mongering around immigration. That's what it does. It's a cycle. One of the smartest things anyone's ever said about legislating the web, I forget who said it, is: I don't necessarily think you should legislate the content. That's a really slippery slope. I think you should legislate the algorithm and what the algorithm is highlighting. And in a way, that's more practical, because the algorithm is sort of a single thing I can look at and inspect and know how it's treating the content, even if we want to pretend all content is neutral. But it's also the thing that the company is in fact responsible for creating, right? Facebook doesn't create all the posts (well, these days it's mostly bots creating the posts), but they do control who sees what.
David: Same thing with YouTube, same thing with almost anything out there that has an algorithm. The company did not create the content the algorithm is sorting, but they absolutely created the algorithm. And I think, given what we know about cognitive bias, you should absolutely be held accountable for the algorithm, the same way you're accountable for the software that decides whether or not a car is viable to be out on the road, or whether that nuclear power plant is safe. That's the level of responsibility you should have for the algorithm.
Dan: That's a really interesting point, because a lot of the way these platforms try to slip out of taking responsibility is saying, well, we're not creating the content. But you're absolutely right: the algorithm is something they've created. Not just created, but fine-tuned to maximize profit.
Rachel: Yeah, this is about monetization. Absolutely.
Dan: And they've not only created it, but it's sort of their own editorial board, right? Their own mechanism for making decisions about what people see. Just like with a newspaper, where we can hold the editorial board accountable for the kinds of things that appear in it.
David: Yeah, and I like that comparison, because one of my big issues with AI is that the AI and the algorithm are starting to become inseparable. Either the algorithm itself is AI-driven, or the AI is acting as the algorithm. With a newspaper, if someone prints something that's blatantly false, or is highlighting one story and not talking about another, we talk about that all the time right now. We have specific people on boards, funding or not funding those stories. We have a clear line of sight. Like with what's happening with CBS right now, we have a clear line of sight: there's this person, there's this contract, there's this thing happening, and we can be very clear about who we think is responsible for what. With an AI, okay, the AI decides that this is the answer to my question. And this has actually happened: I show the AI a picture of a mushroom and it says it's safe to eat, and it turns out that mushroom is straight-up killer. Do I put the AI in jail? Who am I going after, exactly? There are many degrees of separation. Especially when you throw in that, for some of these AIs, these large language models, the people who make them will tell you: I can't tell you how it arrived at that decision. We don't even have the science to tell you how it arrived at that decision.
Dan: Like, it's basically rolling dice. Yes.
David: In a way, from a liability standpoint, it's the golden goose. Don't blame me, the AI came up with it, right? All of which is to say, going back to who we should hold accountable for what: when it is that algorithm, I can say you built that algorithm. The person who paid for it is responsible, the person who built it is responsible, the person who approved it is responsible, right? Once you start getting into AI, it becomes this abstract thing of, oh, I am five degrees removed from how it made that decision. I just gave it a prompt and it went.
Rachel: We talked for a second about how the algorithm is about the money, right? This is how these companies make their profit. When we spoke earlier, we started to talk about the business incentives of misinformation, or business models that pretty much rest on misinformation.
David: Oh yeah, this is terrifying. So there is this graph, created by Facebook in 2018. The x-axis goes from okay content, approved content, you know, your cat videos and whatnot, and as you go further down that line, you get prohibited content: misinformation, hate speech, and so on. And the y-axis is basically engagement, zero to sixty. And as you get closer to wherever you've drawn the line between okay content and prohibited content, that engagement hockey-sticks, right? And it doesn't matter where you draw the line, which is its own little psych experiment. As long as you start getting into prohibited content, all of a sudden engagement just spikes. Now, if you're in the business of satisfying advertisers, and advertisers are more satisfied when there's more engagement, and there's more engagement when there's misinformation and hate speech, guess what business you're in? The business incentive is in fact to do misinformation and hate speech. Your business model at that point is based on misinformation and hate speech. And not for nothing, the second an administration was in place that was cool with getting rid of fact-checking, that was cool with hate speech, Zuck and a bunch of these other folks were like, okay, guess what? We're gonna lean into that now. And again, these tech tools are not neutral. They can be gamed to support really whatever you want them to support. There is definitely a business model, and has been for a very long time, around more hate speech is good, more misinformation is good.
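The shape of that 2018 graph, engagement flat for approved content and hockey-sticking as content approaches the prohibited line wherever the platform draws it, can be written as a toy curve. The function and all its constants are invented for illustration; only the shape comes from the description of the graph:

```python
import math

def engagement(severity: float, policy_line: float = 0.7) -> float:
    """Toy model of the hockey-stick graph: 'severity' is the x-axis
    (0.0 = benign cat video, policy_line = the prohibited-content
    boundary); the return value is engagement on a 0-to-60-ish scale.
    Engagement sits near the floor and spikes sharply near the line."""
    distance = max(policy_line - severity, 0.0)
    return 10.0 + 50.0 * math.exp(-20.0 * distance)

# The spike follows the line wherever it is drawn:
for line in (0.5, 0.7, 0.9):
    assert engagement(line, policy_line=line) > 5 * engagement(0.0, policy_line=line)
```

The business-incentive problem falls straight out of the shape: if revenue is monotone in engagement, the profit-maximizing content sits right at whatever line the policy team draws.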
Dan: Is there a cognitive bias around the fact that we humans are drawn towards controversy, titillation? Like, is there something there, like our brains love that stuff for some reason?
David: It's complicated. There was an experiment, I think out of Stanford, where they were trying to figure out what would motivate people to share content, which is a huge metric for engagement. And by the way, when I say engagement, what I really mean is the measurements we have for engagement, because nobody can define engagement. It is such a bullshit term. But the things people are getting their bonuses for are: are we seeing more shares? Are we seeing more comments? Are we seeing more whatever? That's what we're really talking about. So it's really important to understand what makes someone more likely to share content. And it turns out that content that inspires what they call arousal emotions is more likely to be shared. An arousal emotion is basically an emotion that increases your physiology. Your heart starts pumping faster; you might have an adrenaline reaction, something like that. Fear does that, anger does that, but so does joy, so does wonder. The only things that don't are like sorrow, sadness. Those actually depress you; your heart beats slower. So the researchers were like, huh, is it the emotion or is it just the physiology? They did an experiment where they had people either sit on their butt for an hour and then look at content, or get on an exercise bike for an hour and then look at content. The people who were on the exercise bike, no matter what the emotional valence of the content, were more likely to share it. So all we really need, honestly, if we want people to share more content, is for them to exercise more. All of which is to say, to me it suggests a political component. Because the truth is, inspiring content, those cat videos, they do get shared quite a lot. And it's not because they're making people scared or angry; it's because they're making people joyful and hopeful.
David: You can make just as much money putting out all that joyful and hopeful content. So it suggests the political component of: oh yeah, but a joyful populace isn't gonna help me get elected.
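The arousal-sharing finding described above, that sharing tracks physiological activation rather than emotional valence, can be written down as a toy model. The function and its coefficients are invented for illustration; only the shape (arousal matters, valence doesn't, exercise raises baseline activation) comes from the experiment as summarized here:

```python
def share_probability(arousal: float, valence: float = 0.0,
                      on_exercise_bike: bool = False) -> float:
    """Toy model: likelihood of sharing rises with physiological
    activation (fear, anger, joy, wonder alike). The valence
    parameter is deliberately unused: per the experiment, positive
    vs. negative emotion made no difference once arousal was fixed."""
    if on_exercise_bike:  # an hour of exercise raises baseline activation
        arousal = min(arousal + 0.4, 1.0)
    return min(0.05 + 0.6 * arousal, 1.0)

# A joyful post and a fearful post at the same arousal share alike,
# and pedaling for an hour beats sitting on the couch:
assert share_probability(0.8, valence=+1.0) == share_probability(0.8, valence=-1.0)
assert share_probability(0.2, on_exercise_bike=True) > share_probability(0.2)
```

Note that fear and joy are indistinguishable to the model, which is exactly the political point: the incentive to favor fear over joy is not in the sharing mechanics.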
Dan: We are recording this the day after South Park dropped their new episode. And I cannot separate what you are saying from my family's experience of that South Park episode. I mean, it was three generations of us watching this, and I think there's something to be said for packaging that kind of political messaging in that kind of way, because it's not something we normally see from that side of the political spectrum. But it engaged us, it got our hearts racing, it gave me some kind of joy, seeing things framed in that way. So I can't help but think about this new entry into satire and political commentary that we haven't heard from in a few years.
David: What you're saying puts me in mind of something that I think is really important, another arousal emotion, if you will. It isn't exactly an emotion, but I think it achieves the same end: community. I'm put in mind of something Stephen Colbert said, you know, a recent victim of all of this, about comedy. He said that one of the great things about comedy, one of the reasons he likes doing live comedy, is that when people laugh at something you have said, there is this moment of shared vision, shared community. We all think this thing is funny. So for one moment we're all in agreement, we're all in community, we're all expressing this joy. And I'm sure laughter counts as this thing where, when you see something that makes you really laugh, you are likely to share it. So I think that community, for lack of a better word, is its own little thing that we're all looking for right now, and the content becomes an excuse for it, a thing to bond over. So if people are watching South Park and recognizing it, like, yes, right, that reminds me of when I saw the South Park movie and Saddam Hussein was basically the same character: that's a thing I can lock into, that we're all sharing at the same time. Or, on the other end, people who are going to Trump rallies are feeling community. I think that is a legit thing people are feeling, a legit reason people vote for him in such large numbers: because they feel, for just a moment, in community. And that is also an arousal emotion. That community may be centered more on fear, but it's still community, and we're gonna seek that out too. And I remember there is a company that does emotional analysis of voting, and one of the things they've found is that yes, fear will get people to vote, anger will get people to vote, but that's sort of second tier. The top tier is hope. That's the thing that brings people out.
David: So if you see lots of people voting for Trump, yes, part of that is anger and fear, but the majority of it is hope. Trump gave them hope, right? Biden gave them hope. Obama gave them hope. So whenever you see that kind of behavior in large numbers, you have to think about community, you have to think about hope. Not just who do they hate, because that's definitely part of it, but what do they hope is gonna happen? How does that give them the illusion, or the reality, of community? I think those are the kinds of biases, those are the kinds of emotions, that are really at play with stuff like this.
Dan: You know, I've been reading a lot about propaganda for this work, and one way of framing or thinking about propaganda is messaging to let people know they are not alone. I think it ties into this theme we're talking about, of drawing people together. When an entity in power puts out propaganda, they're trying, I think, to close the distance between whoever they are and these individuals who might think they are alone. Helping people see they're part of something bigger, whatever that something bigger might be, creates that community, draws people in. And sometimes the propaganda needs to be false in order to make people feel less alone.
SPEAKER_02: Let's talk about "the bigger the lie, the more people will believe it." The caveat there is that it has to be a lie people already want to believe.
Dan: Yeah.
SPEAKER_02: So you'll notice when Trump says outrageous shit, it isn't just totally random. Like when that helicopter and airplane crash happened early on in his administration, what did he blame? Did he blame space lasers? Did he blame aliens from outer space? Completely wacky things to blame. No, he blamed DEI, because that's the thing that people want to believe is true. They want to believe that DEI is the villain. So let me just keep beating that drum. And they want to believe that they're not alone in believing that. Right? It's like, it's not just me, Trump said it. Trump said that these people are unqualified and that me and my white privilege should actually be the one getting that job. That is a crucial component. And again, it comes back to confirmation bias, the thing that you believe and want to see more evidence of. And it's interesting, I've never really explored this, but I think you bring up an interesting point. I feel like part of the comfort of confirmation bias isn't just cognitive consistency; I think it is about loneliness. A lot of the time when they talk about whether someone wants to continue with a belief, like hate speech, what have you, there is this component of: if I go against this, like if someone gets doubts about, hey, maybe gay people aren't evil, part of the hesitancy to follow through on that thought is, yeah, but I'm already in a community that kind of hates gay people, and if I start going against that, I'm gonna have to give up my community. And the price is way too high, right? It's like, I don't really know any gay people. I know a lot of people who hate gay people, though, so do I really want to lose all my friends? Right. Like that's the thing. Even if it's something you personally don't feel super strong about, it's like, eh, gay people, I could take it or leave it. I don't really care.
But oh, if I want to still have friends, if I don't want to get shunned, if I don't want to get canceled by my community, I guess I gotta believe this thing.
Rachel: Yeah, and leaving your community is a really dangerous thing to do. Like the animals inside of us know it is a much safer bet to walk in a group than to walk alone, you know, metaphorically. And so making that choice to leave a community because of one quote unquote minor difference is a huge ask to make of somebody.
SPEAKER_02: And it's why you see a lot of language around, hey, when these people start doubting, don't be all "I told you so" and shun them, because then they have every reason to just go right back where they came from. And while I sympathize with the impulse to shun, and the satisfaction that comes from saying, hey, you voted for this guy, now you're screwed, haha. I get that. I absolutely get that. At the same time, I also acknowledge, if you want to build up our ranks, you might have to swallow that, because they are still looking for community.
Dan: When we were first talking to you, you said that for the most part cognitive biases are universal, but you've observed or seen research about one exception, this individualism versus collectivism thing. And as we talk about community, I'm reminded of that conversation. Can you first help us understand what the differences are between those two things, and then maybe we can talk about that in the context of disinformation?
SPEAKER_02: Yeah, so I did about a hundred or so episodes of the Cognitive Bias podcast, each one dealing with a different bias. And, you know, you notice trends, and one of them is: no matter where you go in the world, most of these biases will happen. People will roll a die harder if they want a higher number, and softer if they want a lower number, no matter what country you're in, right? It's called the illusion of control. That's something you're gonna find wherever you go. However, there are social biases that have to do with your own identity. So for example, the fundamental attribution error, which is the bias that comes in when I see somebody running a red light and think, oh, that person is probably very irresponsible. But if you run a red light, you're like, oh, I was late for work, right? You think about your circumstances when you judge yourself, but if it's somebody else, you assume it's something about them personally. You attribute it to their personality. That is more prevalent in Western, individualistic countries, where we do, in fact, believe that everything centers around the individual. However, if you go to countries that are more collectivist, your Vietnams, your Chinas, your Japans tend to be a little more collectivist, you will find that those biases don't show up quite as much. In fact, there's a really interesting experiment where you show someone from an individualistic culture an aquarium and ask them later, what do you remember? And they'll say, oh, I remember this fish had this color and that fish had that, right? They're remembering the foreground, the things that were moving around. You ask someone from a collectivist culture, and they're more likely to also remember, oh, and there was this blue hue to the water, and there were these leaves in the background. They'll give you the whole context of the aquarium.
And the idea is that, you know, if you live in a collectivist culture, you're kind of trained to see the whole and think about your relationship to the whole as opposed to just your own wants and needs. So I'm very curious, and this is research I haven't done or haven't looked into, but I suspect is true that part of like disinformation, or at least this American brand of disinformation, where it's kind of like opt-in propaganda, like which Plato's cave do you want to live in, is particular to individualistic cultures. Which is not to say that collectivist cultures don't have propaganda, they absolutely do, but I think it works a little different.
Dan: Do you have a go-to example for what you mean by opt-in propaganda?
SPEAKER_02: So when we think about propaganda, we think about, I don't know, Soviet Russia and those posters, right? The hammer and sickle, it's like, do this, be this way, this is the truth, and there's only one source of truth. Or we think about 1984: there's only one source of truth, here it is, and everyone believes it because there's nothing else. You don't have anything else to go to, right? That's how we traditionally think of propaganda, as hierarchical, top-down, there is one truth. What's interesting about modern propaganda is that there are a billion truths. How many websites are there? Okay, that's how many truths there are. How many social media accounts are there? Okay, that's how many truths there are. Again, because of the way we set up the web, once you've declared your preferences, the algorithm is happy to just feed you that particular version of the world. You've basically opted into a version of the world. Are you a flat earther? Great, here's all the content about flat earth. Are you Q? Okay, here's all the QAnon content you can handle. Do you believe in universal basic income? Okay, here's everything associated with it that you can believe, right? Whatever you believe is the truth, I can help you. I can give you an entire USSR's worth of propaganda around that. So instead of having one source of truth that everyone has to believe, it's more like, hey, you want to build a world, fake or real? Great, we've got enough content for you. So it's opting into a particular kind of propaganda, but it's still propaganda. It does not question itself. The null hypothesis, like, if you're wrong, what else might be true? It doesn't do that. We don't have an algorithm for that. We could. But we don't. Instead, our algorithms are recommendation algorithms. They say, oh, you like this? Okay, let me look for all the tags that also have that. Here's this.
That's what I mean when I say opt-in propaganda. As far as I can tell, it's kind of a new phenomenon, just because we haven't had the technology to do it at scale like we can now.
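The tag-matching loop David describes ("you like this? Okay, let me look for all the tags that also have that") can be sketched in a few lines of Python. This is a hypothetical toy, not any real platform's recommender; the `recommend` function, the feed items, and the tags are all invented for illustration:

```python
# Toy sketch of "opt-in propaganda": a recommender that only ever
# surfaces content sharing tags the user has already engaged with.
# Everything here (function, feed, tags) is a hypothetical illustration.

def recommend(feed, engaged_tags):
    """Return feed items that share at least one tag with the user's history."""
    return [item for item in feed if engaged_tags & item["tags"]]

feed = [
    {"title": "Flat earth explained", "tags": {"flat-earth"}},
    {"title": "Local weather report", "tags": {"weather"}},
    {"title": "Flat earth meetup", "tags": {"flat-earth", "community"}},
]

# One click on flat-earth content, and the feed narrows to match:
for item in recommend(feed, {"flat-earth"}):
    print(item["title"])  # prints the two flat-earth items, nothing else
```

Note what the sketch doesn't contain: there is no step asking "if you're wrong, what else might be true?" The loop only ever narrows toward what the user already engaged with, which is the missing null hypothesis David points at.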
Rachel: I recently got super into Reddit. I'm so late. But it's been such a long time since I have been at the early stage of watching an algorithm try to figure me out. And man, I clicked on one thing about ghosts, and let me tell you what my Reddit thinks I'm into now. But watching this algorithm make guesses about what world, what set of truths, I want to see reflected at me has been wild. I couldn't think of the last time I had knowingly watched that unfold, because I've been part of all these other services for so long that I can't even remember what it was like before all these assumptions were being reflected back at me. It's just been really clear, like you were saying: there are all these different potential truths, and the algorithm is going to reflect one or a set of those back at you based on your engagement, based on what you've clicked on, based on what your preferences are. And it's so hard to spot sometimes, even for those of us who have been doing this for so long. So having that recent Reddit experience was really eye-opening.
SPEAKER_02: Can you remember what the web was like pre-algorithm? Just discovering stuff, having nothing handed to you. A friend tells you about Homestar Runner, and you're like, okay, I'll check this out. What the hell is this? This is awesome. By the way, Homestar Runner is still unpersonalizable. You cannot log into Homestar Runner. You are gonna get the same Homestar as everybody else. But that weird art gallery experience of the early web, you'd just go around, and the closest you had to an algorithm was things like Slashdot, which was like, hey, here's some cool shit we have found on the web this week. There was a purity to that, right? There were absolutely flaws, but there was a purity to that. Whereas now it's like going to a bookstore, and before you walk in, they ask, what are your three favorite genres? And then wherever you walk in the bookstore, it's just sci-fi, horror, and books about movies. That's what I'd probably pick. And you try to find other books, but the architecture keeps shifting. And if you really want to see a book about, like, US history, you have to actively dig down into the basement and find the one door that says US history.
Dan: One of the things you said was, in an individualistic culture, I tend toward the fundamental attribution error. Like everyone else is at fault, or out-groups are always kind of at fault, and I've got the websites to prove it, right? Because I've got this opt-in propaganda that tells me I'm right. Did I get that right?
SPEAKER_02: Oh, yeah, absolutely. I mean, and again, that is operationalized confirmation bias.
Dan: I love that. That's a great phrase. So if we were to do this on the collectivism side? I'm almost hesitant to get us to speculate about how to make evil collectivists.
SPEAKER_02: Oh, it's already been done. Oh yeah. If we bring it back to misinformation: if you go to China and do a search for Tiananmen Square, you're just gonna see, hey, here's this beautiful landmark. And this has been going on for decades. You would have people at Beijing University, basically Harvard times eleven, like, these are the smartest kids on the planet, and you'd ask them about Tiananmen Square, and they're like, what happened in Tiananmen Square? What are you talking about? So omitting information is actually very powerful. But I was talking to someone who used to live in China, she's here now, and this was a day or two after Trump won for the second time, and she was really heartbroken, and she was like, this is worse than China. And I was like, how? And she said, in China the government would put out a statement and you knew not to believe it, because it's the government, right? There's this very hard line around what to be suspicious of and what not. Here it's a flood. Is it true? Is it not true? Who did it come from? Who knows? Is it even coming from a human being? Who knows, right? It is a completely different information atmosphere to try to navigate. In a way, the Chinese system is easier to navigate, because you're almost bought into the null hypothesis already. It's like, okay, the government said this. If that's false, let me find out what's probably true instead.
Dan: And what's sad about where we are now is that there were plenty of places in the executive branch that were nonpartisan, whose job it was to just put out information about the country, the economy, the world, to help us, right? Everything from the National Weather Service to the Energy Information Administration, all of these were just civil servants putting out information. And because of what has transpired over the last six months, even those things have become untrustworthy as well. It's a new place, I think, for me personally, and maybe for a lot of Americans. We used to be able to distinguish, okay, this part is the propaganda, this part is probably reliable. Now none of it is reliable.
SPEAKER_02: And part of the problem is that navigating this particular environment requires nuance. And Americans, we don't do nuance. Come on, man. We do the Western myth, the cowboy kills the Indians, right? There's no nuance in that. And I think that, in a weird way, Nixon's failure is absolutely the best thing that ever happened for the modern Republican Party, because Nixon's failure destroyed faith in government. It was already on its way out with the Pentagon Papers and all that, the Vietnam War. But Nixon's very public crisis sealed it, and I don't think they recognized it at the time; they were not very happy about it. Carter does not win in any other environment, right? But once you get to Reagan saying government is not the solution to the problem, government is the problem: that is something I believe you could not sell before Nixon.
Rachel: Right.
SPEAKER_02: Before Nixon, that seems a little weird. Even Nixon was doing things like creating the EPA at the same time as creating the DEA. The idea of government being the solution still held water. After that, people are like, whoa, even those hippies got a point. I'm not sure we should trust the government, right? That weirdly enables Reagan to be like, okay, we can deregulate, because you don't want the government messing with your business, you don't want the government messing with airplanes. That was the beginning of the end. When I think about that loss of faith in the institution, and again, that loss of nuance around, okay, there are things in government that structurally are partisan controlled, and things that are a hundred percent not even remotely partisan, that distinction gets eroded and eroded to the point where it's like, oh no, all government is evil, get rid of all of it.
Dan: So we've been talking about the cognitive biases of humans, and we've been talking about how technology exploits those cognitive biases, but maybe we can end on a slightly more optimistic note. Have you seen, experienced, or even helped design any systems that help people avoid succumbing to their biases in ways that are harmful to them?
SPEAKER_02: The problem is not coming up with technologies to help curb bias, and I'll give you an example. It's so easy. It's so easy. The problem is that no one wants to fund it. Or the people who do want to fund it are not Marc Andreessen. So, Trisha Prabhu: when she was 14, she got into the Google Science Fair with a technology called ReThink. Basically, it detects with an algorithm that the text you typed and are about to send out on social media could be hurtful. And it pops up a window that's like, hey, that might be hurtful. Are you sure you want to send it? And like 97% of the people in the test group, who were adolescents, by the way, not exactly great with impulse control, stopped and didn't send the hateful thing. Alright? So a 14-year-old can figure this out. Oh, and it only took two sentences to do it. So no, I don't think we need some kind of supercomputer to figure out how to use technology to curb bias. That's some pretty cheap technology. Why didn't every social media app take this super cheap thing, or buy her out? Google acquires a company a week. Why didn't anyone take her up on this? Because remember the business model. The business model is pro hate speech, not anti hate speech. If anything, they want a pop-up that says, hey, that doesn't look like it's gonna hurt anybody. Are you sure you don't want to call someone a slur before you post that?
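The mechanics David describes, detect possibly hurtful text before it goes out and prompt the user to reconsider, can be sketched in a few lines. This is a hypothetical stand-in, not ReThink's actual implementation: its real classifier is certainly more sophisticated than a word list, and every name here is invented:

```python
# Minimal sketch of a ReThink-style pre-send prompt. The word list and
# function names are hypothetical illustrations, not the real product.

HURTFUL_WORDS = {"idiot", "loser", "stupid"}  # illustrative only

def needs_rethink(message):
    """Flag a message if it contains any word from the hurtful list."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & HURTFUL_WORDS)

def pre_send(message, confirm):
    """Ask the user to confirm before a flagged message is sent."""
    if needs_rethink(message):
        return message if confirm("That might be hurtful. Send anyway?") else None
    return message

# A user who reconsiders (confirm returns False) sends nothing:
print(pre_send("You're such an idiot!", lambda prompt: False))  # prints None
```

The point of the sketch is David's point: the intervention itself is cheap. The friction is the business model around it, not the engineering.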
Rachel: I was told this section was gonna be more optimistic.
SPEAKER_02: So the optimism is in this: we actually have the technology to cure cancer, as it were. The problem is the people in charge of health. If anything, that means you can now focus your efforts. It's not, oh my god, how do we figure this out? Dude, we've got a whole library full of people who've been thinking about this and have ready-made solutions. Honestly, it's the same with almost every social problem we have.
Rachel: Changing the incentives. You know what this makes me think about, Dan? We spend a lot of our conversations talking about the technology aspect of our work, the literal IA craft. Yes. And I don't know about y'all, but definitely more than half of my job, anywhere I've worked, is about working with people: changing behaviors and getting decision makers to change their framing, to think about things in a different way. Little of my work is actually the IA of the thing that we're working on.
SPEAKER_02: We've all built websites before. It does not take six months to a year to build a website. It takes maybe, maybe three. I bet you could do it in one. It takes six months to a year to build a website with people.
unknown: Yes.
SPEAKER_02: Waiting two, three weeks for legal to decide if one page is okay. When I work with big companies, I spend most of my time making slide decks that my stakeholder can use to convince their stakeholders to give them money. I spend most of my time on most web projects fundraising. That is what I am doing. So, yes, 100%, it's always about the people. It is always about the people. But here, I'll give you a hopeful note to end on, about using a bias to fight a bias. This is about changing behavior as opposed to changing beliefs, right? So Stacey Abrams has to convince the Tea Party to vote against anti-environmental legislation. She doesn't walk in there and try to say, hey, I'm going to convince you that climate change is real. She knows that's a non-starter. She knows they don't believe it, right? Instead, she says, I'm going to convince you that property taxes are real. Because she knows that for the Tea Party, it's all about taxes. That's what keeps them up at night. So she comes in and says, hey, if this anti-environmental legislation comes through, the soil bed is going to get eroded and the value of this property is going to tank. And she doesn't even call it anti-environmental. She just says, if this legislation goes through... she doesn't say the word climate, she doesn't say the word change, but she says the words property value. That's it. And they're like, oh, that's terrible. We're going to vote against that. Done. They walk into that meeting not believing in climate change. They walk out of that meeting not believing in climate change. But she doesn't need them to believe in climate change. She needs them to vote the way she wants them to vote. So yes, it is about people. And short-term gains like that are extremely important, especially now. These are very achievable things.
Rachel: Sounds like she's doing some framing.
Dan: Exactly.
SPEAKER_02: Framing device.
Dan: David, we're running out of time. This has been so awesome. But I do want to ask you about your movie. Oh, sure. So give us the pitch.
SPEAKER_02: Yeah, so underneath Washington Square Park in Philadelphia are buried the bodies of hundreds of enslaved people. This is true. What if one night they all came back from the dead as zombies, but they only ate white people? The movie is called White Meat, and we just finished shooting a short film version of it called White Meat Appetizer. In fact, after this call, I'm gonna talk to my editor about the second draft of the rough cut. Super excited about it. I hope to have it out by October, and I will let all of you know about the progress. But yeah, come to whitemeatmovie.com to learn more and see how you can help.
Rachel: Amazing.
Dan: I cannot wait. David, this has been fantastic. Thank you so much for taking the time to chat with us.
SPEAKER_02: Thank you.
Dan: Rachel, I forgot to mention that David Dylan Thomas's keynote at the IA Conference received, I think, the first ever standing ovation there.
Rachel: Yeah, someone said something about that, and I was surprised. I'm bummed I didn't get to stick around to the end. The part I had to walk out on, which made me really sad to leave, was when he was talking about this weird thing we're in with personal brand, and how treating yourself like a product is really icky. Yes. And I shed a tear and then walked out of the building.
Dan: Let's talk about our takeaways from this conversation. Did you have any ideas for a lens?
Rachel: Yeah, I've got a long list here, and the one I want to start with is exploitation. Or maybe it's manipulation, manipulative. What I'm thinking about is, for a hot minute there we were talking about the optimistic side of this situation, and how hope and joy, just like anger and fear, produce a physiological arousal, right? And we were talking about how a lot of these systems exploit the anger and fear side of that physiological effect. And Dave made the comment that you can do the same thing with kittens and rainbows. It made me think about this thing we talk about a lot in design, designing for delight. It's a thing I've always found to be a little icky, or a thing that maybe I don't care about. Maybe I'm not delightful, who knows? But I realized today what was under that for me: designing to elicit a particular powerful emotion, or designing to take advantage of a powerful emotion, is exploitative. Maybe there's a more neutral word for that that I haven't thought of, but this lens asks us to think about what human emotion the system is trying to exploit, if any, and whether you are okay with that.
Dan: I like where you're going. Let me ask you a little bit about this. Is the intent for us to think about the content in this information system we're creating, and how the content itself creates, the word David used was arousal, right, this kind of physiological arousal? Or, and it could be all of these things, I'm not asking you to choose, is the perspective meant to compel us to think about the different aspects of the ecosystem? There's also the algorithm, which makes choices about the kinds of content to promote. The thought I'd had revisiting the stuff we talked about with David was: how does the system, this information system, know or understand the level of arousal associated with a piece of content?
Rachel: Yes.
Dan: And I feel like that is a reasonable space for us information architects to play in. When we're talking about metadata, the emotional component of content is not really something that I've ever tagged, right? But I think it's kind of important in this day and age.
Rachel: Yes. And this is making me think about one of the IA principles I hold dear: the IA shouldn't lie. And this is related; let me get there. So you're talking about how we assess the arousal potential of a piece of content. I gotta find a different word, that's making me laugh too hard. But I'm thinking about how you consider the potential of a piece of metadata to give someone hope or make someone mad. And then the follow-through: is that metadata, that headline, that price, that description, whatever, accurate, so that hope is well placed? Or is it a bait and switch? That's where my head went when I'm thinking about the potential for a tiny little piece of content, a word, a price, a whatever, to exploit someone's potential hope or potential rage about that fact.
Dan: I mean, the truth is that we can't ever really know what's going to get someone, right?
Rachel: Well, let me give you an example I'm thinking of.
Dan: Okay, good.
Rachel: So, ticket pricing. I believe there's recent legislation around this that I haven't been following very closely. But if you go to a Ticketmaster-like service provider, there has been some back and forth on whether the ticket price they show you should be the base ticket price without all the fees, or whether they should lay out all of the fees as part of the ticket price. So which ticket price are you really seeing? And this is one where it's probably less divisive for a designer to sit there and think about which one feels more like the bait and switch, right? That's pretty straightforward, I think. But there's that idea of, am I giving you hope that that ticket really is $15, even though I know by the time you're done it's $27.50? Price is an obvious one, but it can happen with timestamps too. Say you're looking for the most recent information on something that's very important to you. Is the timestamp the last time the content was reviewed and cited as, yes, this is accurate? Or is it just the last time someone fixed a typo, without reviewing the accuracy? That's a more subtle distinction in what the potential is for a piece of metadata to give someone hope or make someone mad. But yeah, I think thinking about that potential exploitation is really what this lens is about. You can apply it to all sorts of things, and it leads you to other lenses. Yes. Like it might lead you to an accuracy lens, or a transparency lens, or something like that.
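Rachel's ticket-price example can be made concrete with a small sketch. The fee names and amounts here are invented, chosen only to land on the numbers she uses:

```python
# Sketch of the base-price vs. all-in-price distinction Rachel describes.
# Fee names and amounts are hypothetical, picked to match her $15 / $27.50.

def all_in_price(base, fees):
    """The total a buyer actually pays once every fee is added."""
    return round(base + sum(fees.values()), 2)

base = 15.00
fees = {"service": 8.50, "processing": 2.75, "venue facility": 1.25}

print(f"Advertised price: ${base:.2f}")                        # the hopeful number
print(f"Price at checkout: ${all_in_price(base, fees):.2f}")   # what you pay
```

Which of the two numbers the interface surfaces first is exactly the design decision this lens interrogates: the metadata either sets an accurate expectation or stages a bait and switch.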
Dan: Or, and I think a lot of our lenses come back to this, maybe it's more of an underlying theme of framing, because it may not be the content itself that's garnering this response; how it's framed is what's making someone upset about something.
Rachel: How about you, Dan?
Dan: You sort of took my bit.
Rachel: Sorry.
Dan: That's fine. This was a really good interview because it was sort of meta, right? We've been trying to talk to people about the information spaces they play in, to help us understand the dynamics of those spaces, and we knew we wanted to talk to Dave from the beginning. Looking at this topic through the idea of cognitive biases has opened my eyes to still more ways of understanding the problem space we're in. During the conversation, I was thinking a lot about the role of the participant in this, right, as they bring the cognitive biases. It's occurring to me that there is potentially a need to understand how information spaces have been manipulated, and to that extent, to understand the kind of manipulation that happens and what the intent of that manipulation is. Which brings me back to belonging, right? This idea that sometimes the intent of disinformation is to make people feel like they belong to some kind of in-group, so that they'll dehumanize an out-group and potentially make bad decisions. And I feel like it may be kind of a long road to get from how do we understand belonging to how do I design my information spaces. But I do think we can ask ourselves: what role does belonging play in the space that I'm designing? Simply asking someone to sign up, or commit a certain amount of personal information, or make connections to other people, all of these things start to draw people into some kind of in-group, even if we never really thought about it that way before. I've asked myself, what's the value to the user of logging in? But I've never asked myself, what's the value of getting them to make this kind of commitment, to build this sense of belonging in the information space we're designing?
Rachel: You know, it also makes me think of relevant exercises, which I believe you have taught, around looking at language and labels, and looking for jargon or language that expressly communicates an in-group. We think of this when we design navigation. An example I really love: you can look at REI, and they've got all these outdoor activities, right? And the navigation really focuses on those activities. Alternatively, you can go to Huckberry, and one of the categories is everyday carry, and another is the ruck shop. For both of those, you kind of need to know what everyday carry and rucking are, which are military in origin. Everyday carry is the stuff you always have on you for everyday emergencies, and rucking is walking with heavy weight, like a heavy bag. Both of those are jargon that you've got to know, or go look up if you don't. So this is an example where the stuff being offered by both of these retailers is approximately the same, but one retailer is using less exclusive language, and one is using more exclusive language to express to a particular group that they belong here. Yes. And your lens of belonging, I think, takes that out of a tactical look at the language and asks you to go even higher up that mountain: to think more broadly about what signals, structures, invitations, and frames you are employing to ask someone to belong to the space you are building.
Dan: Yeah. In a lot of my work these days, I'm trying to apply content-led information architecture concepts to much deeper and more fundamental structures that we have to design in much more complex information spaces. And for better or for worse, the kind of work that we're doing in product IA asks us to treat people, users, as information objects, right? In a matrix-like way, we look at someone and just see all the data behind them. Yeah. Not only that: because we're information architects, we think about how that object is connected to other objects. And there is an emotional component to that connection. When I am on a social media site like LinkedIn, right, and I see that I have a connection, I mean, it's not anything real, it may be a reflection of what's happening in real life, but when I see it represented on there, it takes on another meaning. It creates this kind of sense of belonging to a community. And I have to admit, that sense of belonging sets some expectations for me.
Rachel: Yeah.
Dan: And I'll post something and people won't see it, and I'll be like, but I'm trying to talk to my community, I'm trying to talk to my people, I'm trying to talk to the other people to whom I belong. Why are they not seeing it? But I also have a group chat with my family, and I'll tell them that dinner's ready and they don't see that either. So maybe it's just me.
Rachel: This is really making me think, and this is not an extra lens, just a theme that came out of talking to Dave: this idea of collectivism, of humans as a group and not just a bunch of individuals battling misinformation in the world. The segue here is, I think a lot of the emotions we experience in social media, like LinkedIn, are the result of us wanting to be part of a group of people. We want to belong. It's human nature. You're safer in a group; you're safer when you have a crew. And this really has me thinking about something we talked about, I think even in our pilot: what to do about misinformation, and how a lot of the ideas fall to individuals to implement. Like, we need to teach individuals how to be more information literate. You need to learn how to see through a bad headline. You need to learn how to tell when an image is AI generated. Versus this idea that individuals exist in the context of groups to which we belong or want to belong, these groups exist in the context of systems in which information is exchanged, and these systems are where the machinery of misinformation is happening. So I keep coming back to this desire to hope for, and explore, what needs to change in the system in order to have an impact on our current misinformation landscape, because groups and systems are always going to be more powerful than individuals.
Dan: You've been listening to Unchecked with Dan Brown and Rachel Price. We're so glad you could join us, and I hope you join us again. In the meantime, if you've got ideas for folks we can talk to, we want to hear about them. And if you find yourself using any of these lenses in your work, we want to hear about that too. Please do drop us a line, and we'll see you at the next episode.
SPEAKER_00: Unchecked is a production of Curious Squid. Curious Squid helps organizations like yours untangle complex information architecture and user experience challenges. Visit us at curioussquid.com.