Unchecked: The architecture of disinformation
Misinformation and disinformation thrive in today’s technology landscape, and arguably present the greatest threat to modern society. Information architecture – the practice of designing and managing digital spaces – has an opportunity to intervene. This podcast looks at disinformation from an information architecture perspective, and considers ways to expand the practice of IA to address this new reality.
•••
What is Information Architecture? Information architecture is the practice of designing virtual structures – the shape and form of online spaces and digital products. When you click on a navigation menu or follow the steps in a process, you're experiencing the information architecture of a website or digital product.
•••
What is disinformation? Understanding disinformation is the purpose of this podcast. We are trying to figure out exactly what it is and what it means. If information architecture is the practice of designing virtual spaces, then disinformation is something that can occupy that space to disrupt the user's experience. Alternatively, it is a way of manipulating the space (like flooding it with irrelevant facts) to achieve an end unrelated to the space's original intention.
Episode 12: Disinformation and community moderation, with Karen McGrane
Synopsis
Karen McGrane joins Rachel and Dan to explain Reddit from the perspective of a moderator. Reddit is the largest message board on the internet, with thousands of “sub-Reddits” – individual communities based on a topic. Karen moderates the community dedicated to UX Design, which gets hundreds of new posts every week. Rachel and Dan then explore two Lenses: Swarm and Curation Escape.
Stories
"About this Account" on X/Twitter
Big Tobacco
- Tobacco industry playbook (Wikipedia)
- Disinformation playbook (Union of Concerned Scientists)
Interview with Karen McGrane
Lenses
Curation Escape
So much of our experience online is curated by algorithms. A set of rules – not chosen by you – governs what bubbles up into your feed. This set of rules is at the heart of most modern information systems, and can be responsible for perpetuating misinformation. These algorithms also pose a disinformation risk when people manipulate them to achieve their own ends.
- How does the system allow users to escape the curation?
- What role does algorithmic curation play in the system’s experience?
- How does the system allow users to tailor the curation algorithm?
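The tension this lens probes – an opaque, engagement-driven rule set versus a user-controlled ordering – can be sketched as a toy model. Everything here (post titles, the engagement counts, the two ranking rules) is invented for illustration, not any real platform's algorithm:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    title: str
    created: datetime
    engagements: int  # clicks, replies, shares

def curated(posts):
    # The opaque rule set: surface whatever drew the most engagement,
    # regardless of age -- and thus gameable by manufacturing engagement.
    return sorted(posts, key=lambda p: p.engagements, reverse=True)

def chronological(posts):
    # The escape hatch: newest first, no hidden weighting to manipulate.
    return sorted(posts, key=lambda p: p.created, reverse=True)

now = datetime(2026, 2, 1)
feed = [
    Post("Measured explainer", now - timedelta(hours=1), engagements=12),
    Post("Outrage bait", now - timedelta(days=2), engagements=900),
]
print(curated(feed)[0].title)        # Outrage bait
print(chronological(feed)[0].title)  # Measured explainer
```

The point of the sketch: the same two posts produce opposite front pages depending on which rule governs, which is why "how do users escape the curation?" is a meaningful design question.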
Swarm
Participants in online spaces can exhibit swarming behavior, gathering and moving as if one. Swarming groups end up performing a variety of functions – both desirable and undesirable – in an online information space. They can enforce social norms, or alienate other participants. Likewise, they can squash misinformation, or cause it to perpetuate.
- How does the system react to swarm behavior?
- How does the system benefit from swarm behavior?
- How might swarming cause harm?
Come see us at the Information Architecture Conference (IAC26)
- Register for IAC
- Use discount code unchecked for $50 off base admission
_____________________________________________________
Personnel
- Dan Brown, Host
- Rachel Price, Host
- Emily Duncan, Editor
Music
- Turtle Up Fool, by Elliot
_____________________________________________________
Unchecked is a production of Curious Squid
Curious Squid is a digital design consulting firm specializing in information architecture, user experience, and product design
Thanks so much for listening to Unchecked. I did want to let you know that in April, Rachel and I will be at the IA Conference that's happening in Philadelphia from April fourteenth through the eighteenth. Rachel's gonna give a talk on navigating messy projects. I'm talking about some kind of IA nerdery. But anyway, I'm really keen to meet listeners of this podcast. I'm gonna go out on a limb and say Rachel is also keen to meet listeners of this podcast. Anyway, we hope to see you there. To sweeten the deal, you can get fifty dollars off your conference admission using the code UNCHECKED. You can register for the conference at theiaconference.com, or just check the show notes for a link. I can't wait to see you there.
Sarah: You're listening to Unchecked, the podcast about the architecture of disinformation with Dan Brown and Rachel Price.
Rachel: Dan, where's your battery at?
Dan: I mean, it's Friday afternoon before a huge snowstorm is about to hit the east coast of the United States. All I want to do is go to bed for 36 hours.
Rachel: I think you should. What's stopping you?
Dan: I'm responsible for dinner. I gotta cook dinner for everybody.
Rachel: Leave a jar of peanut butter on the counter.
Dan: It would not be the first time I've done that.
Rachel: Problem solved. Next. I too wish I could just, uh, be unconscious for the next 36 hours, but my child is sick.
Dan: How is she? I forgot to ask, I'm sorry.
Rachel: You know, she's resilient, but she just has a fever, and so I have to be a responsible parent and not, like, drag her around town to all the fun things we were gonna do. Which is, honestly, a blessing and a curse, you know. As long as I'm okay rebuilding this Lego set like 18 times over the weekend, if I can just, like, accept that as a form of meditation, then the weekend's gonna be great, because that's all she wants to do.
Dan: Just work on the one Lego set.
Rachel: Over and over again. Yeah, we build a donut truck and then we take it apart. And then we build a donut truck and then we take it apart. I don't like Legos, so I'm really like, c'mon.
Dan: How are we even friends? I've wondered this a lot. We have a whole room in my house dedicated to Lego.
Rachel: I am not surprised, like, at all.
Dan: The apples did not fall too far from the tree in this regard.
Rachel: I should say, uh, my partner, her dad, loves Legos. He's an architect. So she does have a Lego ally in the house. She's got a sponsor.
Dan: I feel somewhat relieved. My younger son has decided that he is going to suspend his, uh, dance training. And so this weekend is his final, with an asterisk, recital ever.
Rachel: This weekend? Really?
Dan: This coming.
Rachel: Is it actually happening, though? Because I thought... that's what I'm saying.
Dan: That was the... that's the big question. What am I doing this weekend? I'm contemplating whether we are going to a recital or not.
Rachel: Is this the season cliffhanger?
Dan: End of season cliffhanger in the Holden Brown household. It, uh, doesn't surprise me at all that his potentially last recital ever might be cancelled.
Rachel: It just feels really on brand for the times, quite frankly.
Dan: It is totally on brand.
Rachel: Well, that sounds thrilling. Do you want to hear something else thrilling? What? Well, it's not really that thrilling, but it's my news piece for today. I have this news piece that is related to what we talk about. Do I actually think it's gonna make a difference? Probably not. I'm just gonna say that right now.
Dan: Okay.
Rachel: Uh, so we have this relatively real-time example of information structures getting more transparent. I actually saw this. A misinformation expert, for lack of a better word, uh, Renée DiResta, posted this on LinkedIn. This is from November, so it's a couple months old now. But at the end of November, X began rolling out, uh, the "About this Account" feature, which includes a geolocation. And so this was a big change for X, which is really known for its transparency. She said sarcastically. They added this new feature to user profiles that displays information about the account, which includes where the account is based, how many times the account has changed its username, the account's original join date, and how the user downloaded the X app. So what this adds up to is, like, in theory, if you have this account that is posting information, you can go look at the account information and see, oh, it's an Ivanka Trump account that's actually based out of India, or that was downloaded through, like, a Nigerian app service. Or you can see they've changed their username 15 times in the last six months. Right. There's signals there. Like, you can only see it if you are a logged-in user of X, which many of us aren't anymore. But the theory is that there's more information scent, actually, in user profiles on X to give you a sense of context. More context. Yeah.
Dan: Yeah. Which we know to be helpful for people to discern whether something is reliable information.
Rachel: Yeah. Or not. So, you know, I don't enjoy bringing news that, like, shines a positive light on X or Elon Musk. I frankly think there's a lot of issues there, but this is relevant to the work we talk about. Yeah. And there's some bugs with this, and, uh, you can imagine how that context can be misinterpreted by a human. Yeah. You could make, uh, judgments like, oh, this person downloaded this app from a Nigerian marketplace, like, a scam. You don't actually know that. It could be a Nigerian person. And so I think, obviously, with any information scent, humans can make judgments and misinterpret things, but it's a step in a more transparent direction, more context about accounts. I don't think it'll stop misinformation, but it does give people maybe a fighting chance to at least interrogate the source of information on Twitter a little bit more. Sorry, X. I almost made it through the whole thing without saying Twitter.
Dan: That's fascinating. Thank you for bringing that up.
Rachel: You're so welcome.
Dan: I felt it was time to kind of talk about Big Tobacco. I think I've maybe referenced it a couple times, or some of our guests have referenced it, but it's clear that in the 50s, Big Tobacco was instrumental in transforming what we had learned about propaganda and advertising into something even more sinister. If you read anything about this time period, a number of scientific studies had come out that connected smoking to lung cancer. Before that, it was difficult to do these studies because smoking was so pervasive; they couldn't get good data on it. And it was easy for them to point to other things, like industrialization, to say, well, it must have been these other things that did it. So in 1950 there was the study in the UK, and then later in 1950 there was another study in the US, and then in 1952 they did a study, I think with mice, yeah, where they applied tobacco tar to mouse skin. And it was clearly carcinogenic. Anyway, data emerged about this connection, and RJR, which was a major tobacco company, there's a line from an internal memo. Whenever you read about this, this line comes up: "Doubt is our product, since it is the best means of competing with the 'body of fact' that exists in the mind of the general public."
Rachel: Wow.
Dan: So they went from being purveyors of tobacco to purveyors of doubt. And we see this constantly now. Pick any topic: some of the folks we've spoken to about climate, or about reproductive rights, or about nutrition even. Any opportunity the opposition has to sow doubt, they will take it. The Union of Concerned Scientists got together and put together this, I think they call it the disinformation playbook, and it lists out a bunch of these strategies that you and I have been looking at over the last several episodes. Things like attacking experts, or trying to sway policy, or insisting that the industry can self-regulate, right? All of these things are just part of this larger playbook of disinformation. I just wanted to highlight that some of this disinformation stuff that we're seeing today has been around for a long time, and that tobacco, I think, was one of the areas in which this started. If we're talking about lenses, then I think one of the lenses to use, maybe if you're more of a consumer of information, is to ask yourself: why is someone trying to cast doubt on consensus, or the status quo, or conventional wisdom? What is ultimately their motivation? How do they benefit from challenging any of those things?
Rachel: We've talked previously about, uh, wedges, and like the strategy of getting a wedge in. And I feel like doubt is one of those very powerful wedges. Right. Doubt and fear feel like they're on a spectrum with each other. And I was listening to a different podcast the other day called The Devil You Know, which you should go check out if you haven't listened to it yet. But she referenced this commercial I'd never heard of: Reagan's "Bear in the Woods" commercial, his political ad campaign. So the Reagan bear commercial is this apparently very significant turn in political culture and, like, political ads. And it really just vaguely suggests this potential threat, which is a bear in the woods, and vaguely suggests that this bear is dangerous and strong, and that you should be as strong as the bear, and then vaguely suggests that Reagan will make you as strong as the bear, and therefore you need to vote for Reagan. Doesn't say any of this out loud. Right. Go watch the commercial, it's on YouTube. Your story of, like, planting seeds of doubt and using this as a wedge really reminded me of what this commercial seems like it was trying to do, and then your lens of asking yourself, who's benefiting from me being doubtful about this thing, or who's benefiting from me being afraid of this thing, is a really powerful lens.
Dan: Right. What's frustrating is that you should genuinely be fearful of the effects of tobacco, and eventually, what we learned is, secondhand smoke as well. But they're turning that around, right? They're sort of turning that on its head, so that you should be afraid of whatever it is, big government and regulation, or you should be afraid of the scientific establishment, right? These sorts of boogeymen that they've created, to create fear of those things rather than the actual, genuine, scientifically proven threat.
Rachel: Yeah.
Dan: I'm really excited to talk to Karen, and I'll tell you why. We have dealt with some wicked heavy topics in the last few episodes. And Karen McGrane is coming to us to talk about the UX subreddit. The amusement park of the internet, arguably; the county fair of the internet, Reddit. We've wanted to talk to her for a while.
Rachel: And now's our chance. Now's our chance. Let's go. Today we've got Karen McGrane with us. Karen advises customers on digital strategy with a focus on content strategy, operations, and management. In her current role as senior director, customer experience at Contentful, she works to connect product, marketing, sales, and customer support teams. Her two books, Going Responsive and Content Strategy for Mobile, were published by A Book Apart. Karen, thanks for joining us.
Karen: Thank you so much for having me. It's great to get to spend some time with you both.
Rachel: There are lots of reasons. We just always want to speak with you. But the reason that we invited you here is because of your relationship with Reddit.
Karen: It is true.
Rachel: So for our listeners, uh, who don't know, let's start. What is Reddit, and how are you involved?
Karen: So Reddit is, I would call it, a long-form user-generated content platform. If you're an old head like me, uh, you might remember Usenet, or Digg was another one, Metafilter, maybe. And my involvement with it is that I am a moderator of what I believe is maybe the largest forum of user experience professionals on the web. So I'm on r/UXDesign. We have more than 200,000 sub members, and we get about 100,000 visitors a week.
Rachel: So, small crowd.
Karen: It's a lot of people. It's a global community, and it's people talking about their jobs, people talking about design problems, people venting about their coworkers. I describe it, actually, as the anti-LinkedIn.
Rachel: Oh, I love that.
Karen: Because one of the defining features of Reddit is that it's mostly anonymous. I am not anonymous, and there's plenty of other people in the forum that are not, but you can sign up with a username that is not attached to you at all. In fact, one of my co-moderators, UXNet, I have no idea who she is. I do not know her name.
Rachel: Oh wow.
Karen: Yeah. She's compl... she wants to be completely anonymous, and that is how we do things.
Rachel: It's me. It's not me. Yeah. It's different. Maybe it's Dan. This is a conspiracy theory.
Karen: So it's very different from LinkedIn, because you can say things in an anonymous forum. You can ask questions in an anonymous forum that you would not want attached to your LinkedIn profile at all. And so I think it gets a lot more real and a lot more honest in some ways. But the tone, I guess I would say, people have to maintain a level of professionalism on LinkedIn because their boss is watching. And that really doesn't exist on Reddit at all.
Dan: How does a forum about UX get unprofessional?
Karen: Oh gosh. There's a lot of venting about the job market. There's a lot of venting about, as people who have been in the field for a long time have seen, there's been a massive influx of people from boot camps, or people who have been sold on the idea by influencers that UX is a fun way to get a remote job that pays six figures. And in fact, that is 100% misinformation. In this day and age, the job market is very tight. And so there's a lot of frustration, I think, that people want to express. And we as moderators really strive to balance that. It's a tough line to walk. I really do believe that if we're going to be the anti-LinkedIn, there does need to be a space for people to share the negatives, but it also can tip over easily into too much negative venting. So we try to manage that.
Rachel: Okay, talk to me more about the management of that. Like, what do you do as a moderator? Walk us through a day in the life.
Karen: There's, I guess, three concepts that I should explain. So there is what I would call the sub itself. So you can go to Reddit, slash r, slash UXDesign, and you can see a feed of every post that's been made to the sub. There's various sorting algorithms behind that. So you can sort by the top posts of the day. You can sort by Best, which looks at both posts and comments. You can sort by New, which will just give you a chronological feed. Then there's the concept of the main feed. There's an overall feed that each user gets, where each of the subs that they subscribe to will have posts show up. And there's a relatively complex algorithm behind that to determine which posts you're gonna see. It's gonna be based on what you've engaged with, you know, what posts are getting a lot of traction, maybe which communities you haven't engaged with recently, such that they want to surface them for you. I don't really even know what happens with all of that. And then there's also this concept of stickied posts. And the stickied posts are maybe one of the more controversial ways that we manage the sub. What most subs do, or what a lot of subs do, is they take common topics that get asked a lot, like very repetitive topics, and they try to redirect them to the stickies. And what that does is it keeps them out of the main feed, so that people aren't seeing the same repetitive questions over and over again. But it also keeps them out of the overall Reddit feed. One of the biggest reasons why that's a benefit is that a lot of the very commonly asked questions, or a lot of, like in our case, the venting posts where people are complaining about the job market, those get a lot of engagement. And so those will wind up showing up higher in the main Reddit feed.
So the experience for somebody, if we don't redirect those posts, is that they'll wind up seeing the complaining posts more frequently than they will see the posts where people are asking about actual design problems, or where people are asking, how do I do a better job at this particular problem? So we try to manage that, and it's a tough call. I feel bad about it sometimes, because I recognize that people's personal job posts are getting less attention because we shove them off to the sticky, but at the same time, I think it actually creates a healthier community overall.
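The sticky-redirect tactic Karen describes can be sketched as a simple routing rule. This is a toy model only: the topic names are invented, and Reddit's real feed mechanics are far more involved:

```python
# Topics judged too repetitive for the main feed (illustrative list).
REPETITIVE_TOPICS = {"job search", "venting"}

def route(posts):
    """Split incoming posts between the main feed and a weekly sticky.

    Redirected posts never enter the feed, so the heavy engagement they
    attract can't push them to the top of members' overall Reddit feeds.
    """
    feed, sticky = [], []
    for post in posts:
        if post["topic"] in REPETITIVE_TOPICS:
            sticky.append(post)
        else:
            feed.append(post)
    return feed, sticky

posts = [
    {"title": "Critique my checkout flow", "topic": "design critique"},
    {"title": "Six months, no offers", "topic": "job search"},
]
feed, sticky = route(posts)
print(len(feed), len(sticky))  # 1 1
```

The design trade-off is exactly the one Karen names: the redirected posts lose visibility, but the feed stops being dominated by whatever happens to draw the most engagement.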
Rachel: I don't think I had really realized, before speaking with you, that moderating is more than asking people not to be jerks, uh, in their responses. Like, I guess I knew it was more than that, but you're doing a lot of information architecture, like managing the shape of the subreddit and working with the algorithm in a more productive way, which I hadn't really realized.
Karen: Uh, the UX person in me can't say I'm thrilled with the moderation tools, because there's so much I would fix. But the tools they give moderators are actually pretty good. And you should see the rules that we have. There's a concept called removal reasons, which is a way that we can alert people to why their post has been taken down. And we use those removal reasons extensively. In fact, one of the things that we do is we have a list of topics that get asked very frequently, and we maintain links to all of the previous times the question has been asked. With the removal reasons, we can remove the post, drop in the removal reason, and then reapprove the post, so that the moderator note that says, here's all the times this question has been asked before, shows up right at the top. People have described that as a little passive-aggressive, which it might be. But it's like we're doing the housekeeping to try to gather up, like, hey, a lot of people have asked in the past about what are resources for B2B SaaS or enterprise UX design. And rather than, you know, having this question have to be asked from scratch every single time, why don't you go read the other 15 posts where people have already answered this question?
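That remove, annotate, reapprove sequence can be modeled as a tiny state machine. A toy model of the workflow as described, not Reddit's actual moderator API (the field names and structure here are invented):

```python
def remove_with_reason(post, reason, prior_links):
    """Take the post down, pin a moderator note pointing at earlier
    answers, then reapprove it so the note sits at the top of the thread."""
    post["status"] = "removed"
    mod_note = {
        "author": "moderators",
        "pinned": True,  # shows up right at the top
        "body": reason + " Previously asked: " + ", ".join(prior_links),
    }
    post.setdefault("comments", []).insert(0, mod_note)
    post["status"] = "approved"  # reapproved, now carrying the note
    return post

post = {"title": "Resources for enterprise UX?", "comments": []}
out = remove_with_reason(
    post,
    "Frequently asked topic.",
    ["/r/UXDesign/abc123", "/r/UXDesign/def456"],
)
print(out["status"], out["comments"][0]["pinned"])  # approved True
```

The interesting part is the final state: the post survives, but with the "librarian work" (the index of prior threads) structurally attached to it.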
Rachel: Yeah, you're doing librarian work.
Karen: It is librarian work.
Rachel: Amazing. Obviously, this is a podcast about misinformation, so I'm gonna take it there. Where does misinformation rank in your list of responsibilities as a moderator?
Karen: I mean, I would say pretty high, just in the sense of, like, we don't want people getting bad information about the job market. Scams is a big one. Like I said, we try to redirect most questions about people's individual job search, but the posts that we do tend to leave up are about scams. Same thing with what I would describe as abusive requests for pre-work for interviews, like people asking for way too much work, like, please redesign this whole flow in order for us to interview you. There's a lot of sense that startups especially might be using that as a way to get free work out of people. And we try to at least keep those up so that people are aware that that happens. There's AI-generated bot-farming accounts. We try to keep an eye on that. It's not always easy to tell, but, uh, I feel like the community is pretty good at picking up on that now.
Rachel: Tell me more about that. Because, like, you're one person, or your team of moderators, who can be on the lookout for misinformation. I'm really curious what you observe, like, the pack response to perceived misinformation entering the field.
Karen: It's funny. I think people have really learned how to read. And when they see a post that feels like it's AI-generated, they call it out. Reddit actually has very strong moderation filters themselves, which the mods don't control, but, like, Reddit admin as a whole controls. And they're on the lookout for bot-generated traffic. They're on the lookout for accounts that are really just spamming marketing information. So either between the community or between the automod from Reddit, a lot of those things do get removed. One of the things that I'm really sensitive to, though, is sometimes what reads as AI is actually a person who's not a native speaker of English using AI to improve their grammar or improve the English that they're posting. And so we try to keep an eye out for that. And sometimes the community will flag something and say, this seems like AI, and the person will add a note to the bottom of their post, like, yes, I did use AI to write this. It's because I am not an English speaker.
Rachel: Just because something was written by AI doesn't mean it's misinformation. But what are those posts about? Like, what's the point?
Karen: Usually it's somebody fishing. Like, they have a product that they're trying to hawk. They're smart enough not to just directly try to sell the product. So they'll ask a question that sounds like, I want to know more about this. But if you go and look at their profile, what they're doing is trying to push a product. And Reddit is actually amazing about flagging that kind of stuff. Often, even before I have a chance to see the post, Reddit will either have taken it down or actually have suspended the account, because they can tell that it's spam.
Rachel: Oh, interesting. You mentioned some removal principles or removal guidelines. What are the principles that guide you as you're kind of doing your job of manipulating this information space?
Karen: I mean, it is a UX sub with UX behind it. And the idea is that we have an audience that we're aimed at. Like, there's an intended audience for the sub, and it is people who are experienced working in UX. So I don't like to put years of experience behind it, because I don't think that that's entirely relevant. But if I was forced to explain what we're looking for, it's like, you should have at least two to four years working in the field. You should probably be on your second UX job. I don't mean to say you can't be brand new to the field, but, I mean, that's not who we're aimed at. We're aimed at people who have already been doing the job and are coming because they have more in-depth, more meaningful questions to ask about the job than somebody who is just starting out. That doesn't mean that we don't allow more junior people to participate and contribute, but we really try to avoid very basic questions, like, how do I break into the field? How do I do this really simple thing? We're asking you to show up with good questions that your peers want to answer, not basic questions that you could get through asking anybody else.
Rachel: So I imagine, when you're applying these principles and having to make decisions, everyone just accepts all of your decisions all the time without complaint, right?
Karen: Everyone is always so friendly and kind, and no one has ever given me any death threats in the modmail. No, I mean, it's Reddit. Like, sometimes everything is just so calm, and the sub is just running along very pleasantly, and then all of a sudden it's a full moon or something, and we just start getting bonkers modmail from people complaining that their post was removed.
Rachel: Like what?
Karen: I mean, literally, like, you all are a bunch of power-mad demons who live in your parents' basement, and you're surrounded by rats and pizza boxes, and you pee in bottles. Like, it's that level. Wow. Over the top. I feel like we are so open and respectful. And if people come to us like, hey, you know, I'm sorry this got removed, can you explain why? Or, you know, did I overstep here or something? We'll respond pretty quickly and very respectfully. But people who are jerks, it's pretty much an instant ban. I'm careful about banning. We've had previous moderators that wielded the banhammer a little bit too forcefully. I use it very carefully, but if you're being mean to us, you're done. We're not gonna tolerate it. This is my volunteer job. Like, I don't have time for that.
Rachel: How much of your time does it take?
Karen: You know, not very much, honestly. So I would say we probably get 700 posts a week. About 350 of them get removed or redirected.
Rachel: Oh wow. Wow.
Karen: Yeah, it's a pretty high percentage. And so it's like 350 posts get through, and then there's maybe 9,000 comments. I definitely read every post. I don't read every comment.
Rachel: Wow.
Karen: I really rely on the community to flag or report comments that are violating something. But usually, if somebody reports a comment, it's because someone is being openly hostile, racist, misogynist. It's veering over into, like, whoo, you need a timeout.
Rachel: 350? I'm reeling. I couldn't read 350 Reddit posts in a week, but that's why I'm not the volunteer moderator. You are.
Dan: What drew you to it?
Karen: You know, I was on Reddit and, uh, I was participating, and one of the mods asked if I would become a moderator, and, uh, I was like, yeah, sure, why not? And, you know, I think four or five years later, here I am, still doing it.
Dan: Here you are.
Rachel: So I want to shift gears a little bit. When we talk about online spaces, we inevitably come back to the algorithm. I did not realize how much shaping of the subreddit you were doing, kind of in response to how the algorithm can clutter the feed and all this stuff. So talk to us a little bit about the role the algorithm plays in Reddit and your interpretation of how that serves or doesn't serve the community members.
Karen: It's a great question, because I was an old Twitter user and was very attached to the idea of the chronological feed. And Reddit is very much not a chronological feed. The algorithm really defines what people see. So for me as a moderator, I will go to r/UXDesign and be looking at it in chronological order. So I sort it by New, and I'm always just seeing the newest posts that show up, because I want to see what has been posted so I can keep an eye on it. That's not how any other sub member experiences it. They're experiencing it in their overall Reddit feed, interspersed with posts from all of the other subs that they're subscribed to. Which posts show up in that feed is determined by a relatively complex algorithm that I actually don't understand. We as mods have to be sensitive to the fact that the algorithm is going to prioritize the posts that get the most engagement. And that includes upvotes and downvotes. So Reddit has a, I think, relatively unique system where people can thumbs-up or thumbs-down a post, and then there's the amount of comments that the post gets, and you can also, like, upvote and downvote the comments.
Rachel: If a post gets downvoted, does that mean the algorithm actually thinks it's less important, or does that still count as engagement?
Karen: No, I think it will definitely not prioritize posts that have been heavily downvoted. Yeah.
Rachel: Yeah, that is unique.
Karen: Yeah. But I think they have a way of identifying what is considered controversial. So if it's getting a lot of upvotes and a lot of downvotes, if there's a lot of active comments, then that will probably get prioritized. I guess I should also say, there's the feed that you will see of all the subs that you're subscribed to, but then there's an overall feed. There's just, like, an everything-on-Reddit feed. It's called Popular. So our sub is not big enough to ever wind up reaching what might be called the home page. It could, maybe someday. But there are subs where a post will reach the home page of Reddit. What they say is it's reached All, or it's reached Popular. And what that means is you'll get an influx of people who are not members of the sub looking at it, maybe commenting on it. And that's an entirely different moderation experience. I'm actually grateful that I don't experience that. So I guess I will say, there's this thing going on on Reddit right now. There's a sub called Kitchen Confidential, which is the best sub on Reddit. It's for back-of-house restaurant workers, and these people are the most hilarious, foul-mouthed, drunk, ride-or-die people you will ever meet in your life. So there's a guy who, for the past two months, has been cutting a cup of chives every day.
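The upvote/downvote behaviors Karen describes line up with the ranking functions in the version of Reddit's codebase that was open-sourced years ago. The live algorithm has since evolved, so treat this as historical reference rather than current behavior:

```python
from math import log10

def hot(ups, downs, epoch_seconds):
    # "Hot": net score on a log scale plus a steadily growing recency
    # bonus. Heavily downvoted posts get a negative sign and sink,
    # matching Karen's observation that downvotes deprioritize a post.
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    return round(sign * order + (epoch_seconds - 1134028003) / 45000, 7)

def controversy(ups, downs):
    # "Controversial": highest when vote totals are both large and
    # evenly split -- lots of upvotes AND lots of downvotes.
    if ups <= 0 or downs <= 0:
        return 0
    magnitude = ups + downs
    balance = downs / ups if ups > downs else ups / downs
    return magnitude ** balance

# An evenly split post outranks a lopsided one of the same size
# on the controversial sort.
print(controversy(100, 100) > controversy(190, 10))  # True
```

Note how the two sorts treat the same votes oppositely: a 100-up/100-down post is nearly invisible under `hot` (net score zero) but near the top under `controversy`.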
RachelI'm not even on the internet and I know about this guy.
KarenYeah, I mean, it has it has breached containment. There was a Washington Post article about it today. It fits the front page every day. Like the guy's chives and everybody commenting on the chives, and it's wild.
RachelThere is a chive tag on our kitchen confidential. Yes. Okay, I'm gonna I'm gonna close my browser now. Do you think the way that these algorithms are set up serve the community?
KarenYou know, I do. I'm a Reddit user as well. Like, I'm probably subscribed to a hundred different subs, and the feed gives you a nice mix. It pays attention to what you look at. The subs that I like to engage with, those will show up, but it will also give me things that I haven't seen in a while. There's a setting where you can have it recommend other things that you might want to see. And if you're new to Reddit, it's a pretty good way to sign up for a few things that you might want to look at and then have it tell you, oh, if you like this sub, you might like that sub.
RachelFor me, it was a way for Reddit to think that I was really interested in garage door repair.
KarenIt definitely will do the Amazon thing where it's like, you bought a toilet seat, you must want to buy toilet seats now. You have a new toilet seat hobby.
RachelThe only time I really enjoy algorithms is in the honeymoon period, the first couple weeks where I get to see who they think I am and just giggle about all the delightful inaccuracies.
DanI'm seeing some metadata in the subreddit that I'm now paying attention to. Oh, yeah. So one of them is like a category within the subreddit. For example, I'm looking at job search and hiring. Is that the sticky category?
KarenOh, so that's what's called flair. Oh. And there's two types of flair. There's post flair, which is what you're seeing. So you have to tag your post as job search or career or tools and AI or examples and inspiration. That was an elaborate project that I went through. I actually did a podcast with Lou Rosenfeld and my business partner Jeff Eaton, where we talked about how we used AI and how I did an entire coding project. The IA in me finally got to unleash itself on the sub with some real data.
DanSo that was more than just a few hours that week.
KarenThat was actually a labor of love that Jeff and I did because we wanted to test out a bunch of different AI models on how effective they would be at categorizing a corpus of data. It was kind of doing double duty.
DanYeah.
KarenThe feedback on it was that the AI was not that great and actually didn't do nearly as good of a job as I did. Maybe I'm flattering myself, but no, I'm right about that. But then there's also what's called user flair. So that means that you as a user can set a flair for yourself. Right. We use it in a very specific way, in that there's one post type where, in order to respond to it, you have to have your user flair set. The post type is called Answers from Seniors Only. Unless you have set your flair to say that you are either experienced in the field or a veteran in the field, your comment will automatically get deleted. That one is also a little controversial. People have been a little mad about it, but I actually like it. It's only one post type, and it's a way for people who want responses from users who have jumped through a very, very small hoop to validate who they are. I can see all the comments, so I can see the ones that are removed, and it actually does increase the quality of the comments that people get. The level of effort that people put into those comments is higher.
DanYou're looking at quality. Our lens is misinformation, and I would imagine there's maybe an opportunity for us to see this as a potential approach to misinformation: contributions are constrained by some, as you said, small hoop to jump through; they're constrained to a certain group of people who are trusted in some way.
KarenYeah. People can lie. Like we have no validation behind it. But my take on it is if you have a year of experience in the field and you want to lie and say that you're more experienced than that, you're a veteran, okay. Go ahead.
DanYou're helping nobody. Right.
KarenYeah, you're not. If it's that important to you, then please do it. But honestly, it really is just a minor thing that we're asking people to do to validate who they say they are.
DanRight.
RachelOkay. Reddit notoriously sold content to the AI companies to train LLMs. And now people are using LLMs to generate content to post on Reddit, as we kind of spoke to earlier, these AI bot posts. In our pre-interview, you described this as an imminent content collapse. Dire. As a content strategy professional, what do you do to prevent the collapse of Reddit? Karen, how are you gonna save us?
KarenHonestly, I mean, on one hand, I don't care at all about the collapse of Reddit. They've made their decision. They went public and they're making money off of selling content to the LLMs, so it's kind of their job to figure out how to prevent that content collapse. But I do think the content collapse overall is a real problem. Like, there's plenty of data out there that says that if you have AI models being trained on AI-generated content, eventually the content collapses completely and they start just producing gibberish. So that's a danger that we should all be aware of. By the same token, though, I would say Reddit is both an extremely trustworthy source of information, because it is coming from people who are very knowledgeable. Like, you would not believe how amazing the laundry sub is. It's amazing. If you want to get really good at doing your laundry, there's a big fight there today about whether lipase is an enzyme that actually improves the quality of your laundry or not. And it's very funny, but it's also super in-depth information about chemistry, and I love it.
RachelI also understand there's one particular influencer on the laundry Reddit that everyone bows down to because they are apparently a genius.
KarenYes, Lather Daddy. Lather Daddy got a book deal out of it. Oh wow, that's amazing. Yeah, but you know, Reddit is also a source of shitposting. People are making jokes there all the time. And the LLMs are not good at picking up sarcasm, or picking up when people are just goofing around. And so that's how you get the Google AI telling you to add glue to your pizza, right? Because it couldn't tell the difference between an actual good idea and somebody who is just cracking a joke.
DanI rarely use Reddit, and I think it's because there's sort of a high barrier to entry. Like, you need to spend some time looking at it and digesting it and understanding it. And Karen, you compared it to Usenet, and I remember the Usenet days. I mean, I still carry this habit with me: you lurk for a week before you post anything. You lurk for a certain amount of time because you don't want to violate what we used to call netiquette on any of these boards. Now I think it's significantly looser than it was.
RachelWhat is this Usenet you speak of?
DanOkay, all right, settle down. So here's what I was getting at, though. I think Reddit is a genuinely unique space on the internet. The web in particular has become so samey that even the latest technology, an LLM-powered chatbot, is not unique. We have several of those to pick from, but there's only one Reddit. So if Reddit collapses and goes away, my question for you, Karen, is: what do we lose?
KarenI think there will always be a home for long-form user-generated content someplace. You may not remember this, but Digg used to be the big platform 15-ish years ago. And I was a Digg user. I wasn't on Reddit then. And Digg did a massive redesign that alienated every one of their users; it crushed the company. I mean, it was a genuine UX case study of how you can do a redesign and absolutely destroy your entire business. And there was a massive influx of users to Reddit at that point. I was one of them. And I think if Reddit were to crumble tomorrow, the same thing would happen. It would be the same thing as when Twitter got taken over and everybody moved to Bluesky. It didn't happen immediately. And Bluesky's not Twitter, and it definitely doesn't have the scale that Twitter had. So, to answer your question, I guess what would be lost would be the scale that Reddit has managed to achieve, which you can't build up in a day. But Usenet collapsed, and then other things emerged out of it. I think you just have to accept that that's sort of an ebb and flow on the internet. Things get too big, and then they crumble, and then something else emerges out of it.
RachelIsn't it a particular kind of mushroom that always grows on a burned tree trunk? So like when a tree falls down or rots or burns, it creates this fecund environment for other things to grow in its place. And I feel like the internet is this.
KarenThere's a hunger, I think, for people. I'm a text person, right? I'm not on TikTok because I don't do video; it doesn't engage me. And I think there's just a genuine desire for people to have unfettered access to a text box where they can type things. Whether that's a federated community like Reddit, whether that's the comments section on a blog, whether that's a short-form social platform like Twitter or Bluesky, you know, Facebook, whatever. Some people out there really hunger for that, and I think we'll always have that in some form. In the same way, for people who absolutely love video, there will continue to be ever more short-form video platforms, if that's their deal.
DanKaren's conversation exceeded expectations. Of course. I mean, that seems impossible, because expectations were already very high, and yet she did it. She did it. What did you take away from that?
RachelThe quietest, most behind-the-scenes people in a domain actually have some of the most influence and the most impact. Yes. I'm thinking about book editors and Reddit moderators.
DanRight. They're cut from the same cloth.
RachelYeah.
DanI really liked hearing about the way they can set up certain kinds of topics and then consolidate all, in this case, job search complaint stuff under a single thread as a way of keeping the overall board, the overall subreddit, very tidy. You know, it's interesting to hear how they've adapted the existing space. And it's a good reminder to us that users will show up and adapt the space any way they see fit in order to achieve their goals.
RachelYeah. So a lens came out of that for you?
DanI called it Curation Escape. And I don't know that it is really exactly a misinformation thing, but I just loved hearing Karen talk about the algorithm. The algorithm is set up to promote posts that get a lot of traffic, that there's a lot of interest around. And if I come in as a regular user looking at Reddit, I'll see an aggregation of posts from all the different channels that I subscribe to. Again, those are promoted based on the activity or what have you. As a moderator, she gets to see those posts in chronological order, and she highlighted how much she appreciates the ability to do that. And that struck me as an opportunity to escape the algorithm. So I call this algorithmically editorialized curation, right? At some point you just want to say, I don't want to see any of this; I just want to see the posts as they've come up. And so I guess the challenge, when one is designing an information space, is to ask yourself: how does the system allow users to escape or ease up on that algorithmically editorialized curation?
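Dan's lens suggests a concrete design requirement: any algorithmically curated feed should also offer a plain chronological view. A minimal sketch of that toggle, where the engagement-weighted ranking is an invented stand-in for a real "hot" sort:

```python
from datetime import datetime

def feed(posts: list[dict], escape_algorithm: bool = False) -> list[dict]:
    """Return posts either algorithmically ranked or, if the user
    opts out, in plain reverse-chronological order."""
    if escape_algorithm:
        # the Curation Escape: newest first, no editorializing
        return sorted(posts, key=lambda p: p["created"], reverse=True)
    # invented stand-in ranking: engagement-weighted
    return sorted(posts, key=lambda p: p["upvotes"] + 2 * p["comments"],
                  reverse=True)
```

The design point is that both views read from the same data; the escape hatch is just a sort key, which is why it is cheap to offer and notable when a product doesn't.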
RachelI love that you called this out, because I feel like, from a library science perspective, this is such an important behavior to support. But I can't actually think of a single time in my product design career where I've explicitly had this conversation. I can think of one time when we ended up talking about lists. Yes. And the conversation usually comes down to personalization versus a list. Right. And by list I mean like an objective A-to-Z situation. Right. And I think that's really what we're getting at. We mostly talk about the benefits of personalizing results because we believe, as the designers of the algorithm, that if we do a good enough job, then we are making your life better by personalizing the results. But we don't often talk about that, I think, equally important need to just objectively surface information and let a person do with it what they will, and make their way through it in the way that they are going to, without your tools in the way.
DanI mean, my philosophy is that there's tremendous value in even that marginal extra friction, right? That there's value in the process of going through a set of information. Prediction time: if we're looking at how information spaces are going to evolve, there's going to be more and more information all the time. Therefore, people like us who are creating these products are going to be searching for ways to make that information more digestible. And one way to do it is to curate it, right? To apply an algorithm that says yes or no, whether something should be prioritized. And moreover, through large language models we now have tools that purport to help us do that kind of thing. So I think although we might not feel it right now, we're going to get to a place where a lot of information systems have the level of information that Reddit does today. And we will be tempted to take this black box technology and just sort of slap it on to create a curation algorithm. And in those instances, users will definitely need an opportunity to step back.
RachelYeah, I think that's a really good point. It made me think, too, about one of the first places where I was imagining contextualizing this conversation you and I are having right now, which is the search space. And then I was like, we've been using search algorithms forever. Like, information retrieval systems have been deploying relevance algorithms like this forever, because you can't turn up everything. But this is really making me think: that's great, we need those things, but what's getting lost as a result? And when's the last time I sat and thought about that? Not recently.
DanWhat's your lens?
RachelMy lens is more of a band name than a lens, I think.
DanI need to get better at naming my lenses. I like where this is going.
RachelMy lens is swarm.
DanYou can't see, but Rachel did a little hand gesture.
RachelI did like a little death metal hand gesture with that. Swarm. This lens asks, you know, how does the system react to swarm behavior? For reference, swarm behavior is, and this is from Wikipedia, an emergent behavior arising from simple rules that are followed by individuals and does not involve any central coordination. During our conversation with Karen, we talked a little bit about users flagging what they believe are AI posts and calling it out. We talked a little bit about the role of the moderator versus what users do to flag this stuff. And then this is also going off of my personal experience on Reddit, watching users just swarm on posts to either call them out as dangerous, get moderators' attention, or just be swarmy. And so this lens is really about asking, how does the system benefit from swarm behavior? And this is a neutral question, really. I wrote this in a couple different ways in my notes. Like, how does the system exploit swarm behavior? How does it manipulate swarm behavior? How does it rely on swarm behavior in order to work? I think Reddit relies quite a bit on swarm behavior. I've also been thinking a lot about swarm behavior. We're recording this while Minneapolis is occupied at the moment, which is heartbreaking. I used to live in Minneapolis. And yes, this is a tough time. Anyway, so, shockingly, I was thinking about swarm behavior. So this lens is really thinking about how this kind of uncoordinated, simple, emergent behavior that humans engage in can be part of how a system was designed; a system might be designed to elicit it. Or, if your system can't handle it, what are you gonna do then? You know? And I don't have some magical takeaway about why we have to care about swarm behavior, so much as, with Karen's conversation, it just occurred to me: this is a thing we haven't talked about yet.
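One way a system can react to swarm behavior, rather than just absorb it, is to treat converging but uncoordinated user flags as a signal. A minimal sketch, with an invented escalation threshold; real platforms weight flags far more elaborately:

```python
def needs_mod_review(flagging_users: set[str], threshold: int = 5) -> bool:
    """Escalate a post to moderators once enough *distinct* users
    have flagged it. Using a set of usernames means repeat flags
    from one account don't count as a swarm, which blunts the
    'manipulate swarm behavior' abuse Rachel raises."""
    return len(flagging_users) >= threshold
```

The interesting design choices live in the threshold and the deduplication: set them wrong and the same mechanism either ignores a genuine swarm or lets a small brigade silence a post.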
DanIt's a really good call out. When you put a whole bunch of people in a space, they will swarm. And I think it's really useful to identify that it happens, to know that it happens, that it will happen, and to start to think about what will trigger a swarm. You and Karen got into a conversation where people would post sort of these disclaimers to try and head off the swarm, right? They're like, hey, I'm just a 16-year-old kid who's just trying to learn about personal finance, don't come at me, right?
RachelYeah, like don't swarm me, please.
DanRight. I think that's great. It means that the people participating in the space also know that this is a possibility, and for the non-sociopaths among us, that might discourage people from abusing the anonymity or the other opportunities that the tool affords.
RachelYeah. The other thing that appeals to me about this lens is that, often when I go to lenses, I think about individual behaviors, how a system can handle an individual choice. And this is really about a system accounting for, or not, this other entity that is not coordinated, that is completely dispersed, but that, like a swarm of swallows, ends up moving in these directions in a group mindset.
DanYeah. Swarm.
RachelSwarm.
DanAnd that was Unchecked. Thanks so much for listening. We really want to hear from you. If you've got ideas for topics or guests or stories, drop us a line at unchecked at curious-squid.com. If you made use of the lenses that we described today in your practice, we want to hear about that too. Hey, check the show notes for any of the links that we talked about today, and it would really mean a lot to us if you shared this episode with a friend and rated and reviewed us on your favorite podcast platform. Thank you.