People Aren’t Meant to Talk This Much
Breaking up social media companies is one way to fix them. Shutting their users up is a better one.
By Ian Bogost
October 22, 2021
Your social life has a biological limit: 150.
That’s the number—Dunbar’s number, proposed by the British psychologist Robin Dunbar three decades ago—of people with whom you can have meaningful relationships.
What makes a relationship meaningful?
Dunbar gave The New York Times a shorthand answer: “those people you know well enough to greet without feeling awkward if you ran into them in an airport lounge,” a take that may accidentally reveal the substantial spoils of having produced a predominant psychological theory.
The construct encompasses multiple “layers” of intimacy in relationships. We can reasonably expect to develop up to 150 productive bonds, but we have our most intimate, and therefore most connected, relationships with only about five to 15 of our closest friends. We can maintain much larger networks, but only by compromising the quality or sincerity of those connections; most people operate in much smaller social circles.
Some critics have questioned Dunbar’s conclusion, calling it deterministic and even magical. Still, the general idea is intuitive, and it has stuck. And yet, the dominant container for modern social life—the social network—does anything but respect Dunbar’s premise.
Online life is all about maximizing the number of connections without much concern for their quality. On the internet, a meaningful relationship is one that might offer diversion or utility, not one in which you divulge secrets and offer support.
Read: You can only maintain so many close friendships
A lot is wrong with the internet, but much of it boils down to this one problem: We are all constantly talking to one another. Take that in every sense.
Before online tools, we talked less frequently, and with fewer people. The average person had a handful of conversations a day, and the biggest group she spoke in front of was maybe a wedding reception or a company meeting, a few hundred people at most. Maybe her statement would be recorded, but there were few mechanisms for it to be amplified and spread around the world, far beyond its original context.
Online media gives every person access to channels of communication previously reserved for Big Business. Starting with the World Wide Web in the 1990s and continuing through the user-generated content of the aughts and the social media of the 2010s, control over public discourse has moved from media organizations, governments, and corporations to average citizens.
Finally, people could publish writing, images, videos, and other material without first getting the endorsement of publishers or broadcasters. Ideas spread freely beyond borders.
And we also received a toxic dump of garbage. The ease with which connections can be made—along with the way that, on social media, close friends look the same as acquaintances or even strangers—means any post can successfully appeal to people’s worst fears, transforming ordinary folks into radicals.
That’s what YouTube did to the Christchurch shooter, what conspiracy theorists preceding QAnon did to the Pizzagaters, what Trumpists did to the Capitol rioters. And, closer to the ground, it’s how random Facebook messages scam your mother, how ill-thought tweets ruin lives, how social media has made life in general brittle and unforgiving.
It’s long past time to question a fundamental premise of online life: What if people shouldn’t be able to say so much, and to so many, so often?
The process of giving someone a direct relationship with anyone else is sometimes called disintermediation, because it supposedly removes the intermediaries sitting between two parties. But the disintermediation of social media didn’t really put the power in the hands of individuals. Instead, it replaced the old intermediaries with new ones: Google, Facebook, Twitter, many others.
These are not so much technology companies as data companies: They suck up information when people search, post, click, and reply, and use that information to sell advertising that targets users by ever-narrower demographic, behavioral, or commercial categories. For that reason, encouraging people to “speak” online as much as possible is in the tech giants’ best interest. Internet companies call this “engagement.”
The gospel of engagement duped people into mistaking use of the software for meaningful or even successful conversation. A bitter tweet that produces chaotic acrimony somehow became construed as successful online speech rather than a sign of its obvious failure. All those people posting so often seemed to prove that the plan was working. Just look at all the speech!
Thus, the quantity of material being produced, and the size of the audiences subjected to it, became unalloyed goods. The past several years of debate over online speech affirm this state of affairs. First, the platforms invented metrics to encourage engagement, such as like and share counts. Popularity and reach, of obvious value to the platforms, became social values too. Even on the level of the influencer, the media personality, or the online mob, scale produced power and influence and wealth, or the fantasy thereof.
The capacity to reach an audience some of the time became contorted into the right to reach every audience all of the time. The rhetoric about social media started to assume absolute liberty always to be heard; any effort to constrain or limit users’ ability to spread ideas devolved into nothing less than censorship. But there is no reason to believe that everyone should have immediate and constant access to everyone else in the world at all times.
My colleague Adrienne LaFrance has named the fundamental assumption, and danger, of social media megascale: “not just a very large user base, but a tremendous one, unprecedented in size.” Technology platforms such as Facebook assume that they deserve a user base measured in the billions of people—and then excuse their misdeeds by noting that effectively controlling such an unthinkably large population is impossible. But technology users, including Donald Trump and your neighbors, also assume that they can and should taste the spoils of megascale. The more posts, the more followers, the more likes, the more reach, the better.
This is how bad information spreads, degrading engagement into calamity the more attention it accrues. This isn’t a side effect of social media’s misuse, but the expected outcome of its use. As the media scholar Siva Vaidhyanathan puts it, the problem with Facebook is Facebook.
Read: Facebook is a Doomsday Machine
So far, controlling that tidal wave of content has been seen as a task to be carried out after the fact. Companies such as Facebook employ (or outsource) an army of content moderators, whose job involves flagging objectionable material for suppression. That job is so terrible that it amounts to mental and emotional trauma. And even then, the whole affair is just whack-a-mole, stamping out one offending instance only for it to reappear elsewhere, perhaps moments later.
Determined to solve computing’s problems with more computing, social-media companies are also trying to use automated methods to squelch or limit posts, but too many people post too many variations, and AI isn’t sufficiently discerning for the techniques to work effectively.
Regulatory intervention, if it ever comes, also won’t solve the problem. No proposal for breaking up Facebook would address the scale issue; the most likely scenario would just split Instagram and WhatsApp off from their parent. These entities are already global, managing billions of users via a single service.
You wouldn’t get WhatsApp Pakistan, baby-Bell style. And even if you did, the scale of access people have to one another’s attention within those larger communities would still remain massive. Infinite free posts, messages, and calls have made communication easier but also changed its nature—connecting people to larger and larger audiences more and more often.
Wouldn’t it just be better if fewer people posted less stuff, less frequently, and if smaller audiences saw it?
Limiting social media may seem impossible, or tautological. But, in fact, these companies have long embraced constraints. Tweets can be 280 characters and no more. YouTube videos for most users cannot exceed 15 minutes; before 2010, the limit was 10, helping establish the short-form nature of online video. Later, Vine pushed brevity to its logical extreme, limiting videos to six seconds. Snapchat bet its success on ephemerality, with posts that vanish after a brief period rather than persist forever.
Even the capacity to respond to a Facebook post, Twitter DM, Slack message, or other online matter with likes, emotes, or emoji constrains what people can do when they use those services. Those constraints often feel curious or even disturbing, but winnowing the infinity of possible responses down to a few shorthands creates boundaries.
Yet despite the many material limitations that make popular online tools what they are, few platforms ever limit the volume of posts or the reach of users in a clear and legible way.
Imagine if access and reach were limited too, mechanically rather than juridically, by default. What if, for example, you could post to Facebook only once a day, or week, or month? Or only to a certain number of people? Or what if, after an hour or a day, the post expired, Snapchat style? Or self-destructed after a certain number of views, or once it reached a certain geographic distance from its origins?
That wouldn’t stop bad actors from being bad, but it would reduce their ability to exude that badness into the public sphere.
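Those proposals are mechanical, not editorial, which is what would make them easy to build. Here is a minimal sketch, in Python, of what such defaults might look like; every limit, name, and number below is a hypothetical illustration, not any platform’s actual rule.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

# Illustrative defaults, invented for this sketch.
POSTS_PER_DAY = 1              # post at most once a day
MAX_AUDIENCE = 150             # cap reach near Dunbar's number
POST_TTL = timedelta(days=1)   # posts expire, Snapchat style

@dataclass
class Post:
    text: str
    created: datetime
    views: int = 0

    def visible(self, now: datetime, max_views: int = 1000) -> bool:
        # A post disappears once it expires or exhausts its view budget.
        return now - self.created < POST_TTL and self.views < max_views

@dataclass
class Account:
    followers: List[str] = field(default_factory=list)
    posts: List[Post] = field(default_factory=list)

    def publish(self, text: str, now: datetime) -> Post:
        # The rate limit applies equally to everyone, by default.
        today = [p for p in self.posts if now - p.created < timedelta(days=1)]
        if len(today) >= POSTS_PER_DAY:
            raise PermissionError("Daily post limit reached; try tomorrow.")
        post = Post(text, now)
        self.posts.append(post)
        return post

    def audience(self) -> List[str]:
        # Reach is capped mechanically; no moderation judgment involved.
        return self.followers[:MAX_AUDIENCE]

alice = Account(followers=[f"user{i}" for i in range(10_000)])
alice.publish("hello", datetime.now())
print(len(alice.audience()))  # 150, not 10,000
# A second publish() the same day would raise PermissionError.
```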
Such a constraint would be technically trivial to implement. And it’s not entirely without precedent. On LinkedIn, you can amass as large a professional network as you’d like, but your profile stops counting after 500 contacts, which purportedly nudges users to focus on the quality and use of their contacts rather than their number.
Nextdoor requires members to prove that they live in a particular neighborhood to see and post to that community (admittedly, this particular constraint doesn’t seem to have stopped bad behavior on its own). And I can configure a post to be shown only to a specific friend group on Facebook, or prevent strangers from responding to tweets or Instagram posts. But these boundaries are porous, and opt-in.
A better example of a limited network once existed, one that managed to solve many of the problems of the social web through design but didn’t survive long enough to see the perks of its logic. It was called Google+.
In 2010, Paul Adams led a social-research team at Google, where he hoped to create something that would help people maintain and build relationships online. He and his team tried to translate what sociologists already knew about human relationships into technology.
Among the most important of those ideas: People have relatively narrow social relationships. “We talk to the same, small group of people again and again,” Adams wrote in his 2012 book, Grouped. More specifically, people tend to have the most conversations with just their five closest ties. Unsurprisingly, these strong ties, as sociologists call them, are also the people who hold the most influence over us.
This understanding of strong ties was central to Google+. It allowed users to organize people into groups, called circles, around which interactions were oriented. That forced people to consider the similarities and differences among the people in their networks, rather than treating them all as undifferentiated contacts or followers. It makes sense: One’s family is different from one’s work colleagues, who are different from one’s poker partners or church members.
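To make that structure concrete, here is a toy sketch of the circles idea in Python, assuming nothing about Google+’s actual implementation: contacts live in named groups, and every share targets exactly one group.

```python
from collections import defaultdict

class CirclesProfile:
    """Contacts are grouped into named circles; posts are scoped to one circle."""

    def __init__(self):
        self.circles = defaultdict(set)  # circle name -> set of contacts

    def add(self, circle: str, contact: str) -> None:
        self.circles[circle].add(contact)

    def share(self, circle: str, message: str) -> set:
        # The audience is exactly one circle: family never sees
        # what was meant for poker night.
        return {(person, message) for person in self.circles[circle]}

profile = CirclesProfile()
profile.add("family", "mom")
profile.add("poker", "dave")
print(profile.share("poker", "Game moved to Friday"))  # reaches only dave
```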
Adams also wanted to heed a lesson from the sociologist Mark Granovetter: As people shift their attention from strong to weak ties, the resulting connections become more dangerous. Strong ties are strong because their reliability has been affirmed over time. The input or information one might receive from a family member or co-worker is both more trusted and more contextualized. By contrast, the things you hear a random person say at the store (or on the internet) are—or should be—less intrinsically trustworthy.
But weak ties also produce more novelty, precisely because they carry messages people might not have seen before. The evolution of a weak tie into a strong one is supposed to take place over an extended time, as an individual tests and considers the relationship and decides how to incorporate it into their life. As Granovetter put it in his 1973 paper on the subject, strong ties don’t bridge between two different social groups. New connections require weak ties.
Weak ties can lead to new opportunities, ideas, and perspectives—this feature characterizes their power. People tend to find new job opportunities and mates via weak ties, for example. But online, we encounter a lot more weak ties than ever before, and those untrusted individuals tend to seem similar to reliable ones—every post on Facebook or Twitter looks the same, more or less. Trusting weak ties becomes easier, which allows influences that were previously fringe to become central, or influences that are central to reinforce themselves.
Granovetter anticipated this problem back in the early ’70s: “Treating only the strength of ties,” he wrote, “ignores … all the important issues regarding their content.”
Adams’s book feels like a prediction of everything that would go wrong with the internet. Ideas spread easily, Adams writes, when they get put in front of lots of people who are easy to influence. And in turn, those people become vectors for spreading them to other adopters, which is much quicker when masses of easily influenced people are so well connected—as they are on social media.
When people who take longer to adopt ideas eventually do so, Adams concludes, it’s “because they were continuously exposed to so many of their connections adopting.”
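What Adams describes is, in effect, a threshold cascade, and a toy simulation makes the dynamic visible; the network, thresholds, and names below are invented purely for illustration.

```python
# Each person adopts an idea once the fraction of their connections
# who have adopted reaches their personal threshold. Densely connected,
# easily influenced people adopt fast; their adoption then pushes the
# slower adopters over their own thresholds.
def cascade(neighbors: dict, thresholds: dict, seeds: set) -> list:
    adopted = set(seeds)
    rounds = [sorted(adopted)]
    while True:
        newly = {
            person
            for person, links in neighbors.items()
            if person not in adopted
            and sum(n in adopted for n in links) / len(links) >= thresholds[person]
        }
        if not newly:
            return rounds
        adopted |= newly
        rounds.append(sorted(newly))

# A tight cluster where everyone watches everyone else.
network = {
    "a": ["b", "c", "d"], "b": ["a", "c", "d"],
    "c": ["a", "b", "d"], "d": ["a", "b", "c"],
}
# "d" is a late adopter; repeated exposure eventually converts it.
fractions = {"a": 0.3, "b": 0.3, "c": 0.3, "d": 0.9}
print(cascade(network, fractions, seeds={"a"}))
# [['a'], ['b', 'c'], ['d']]
```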
The lower the threshold for trust and spread, the more the ideas produced by any random person circulate unfettered. Worse, people share the most emotionally arousing ideas, stories, images, and other materials.
You know how this story ends. Facebook built its services to maximize the benefit of weak-tie spread to achieve megascale. Adams left Google for Facebook in early 2011, before Google+ had even launched. Eight years later, Google unceremoniously shut down the service.
Up until now, social reform online has been seen either as a problem for even more technology to solve, or as one that demands regulatory intervention. Either option moves at a glacial pace. Facebook, Google, and others attempt to counter misinformation and acrimony with the same machine learning that causes them. Critics call on the Department of Justice to break up these companies’ power, or on Congress to issue regulations to limit it.
Facebook set up an oversight board, fusing its own brand of technological solutionism with its own flavor of juridical oversight. Meanwhile, the misinformation continues to flow, and the social environment continues to decay from its rot.
Imposing more, and more meaningful, constraints on internet services, by contrast, is both aesthetically and legally compatible with the form and business of the technology industry. To constrain the frequency of speech, the size or composition of an audience, the spread of any single speech act, or the life span of such posts is entirely accordant with the creative and technical underpinnings of computational media. It should be shocking that you think nothing of recomposing an idea so it fits in 280 characters, yet would never accept that the resulting message might be limited to 280 readers or 280 minutes. And yet, nothing about the latter is fundamentally different from the former.
Regulatory interventions have gotten nowhere because they fail to engage with the material conditions of megascale, which makes policing all those people and all that content simply too hard. Megascale has also divided the public over who ought to have influence: any differential in perceived audience or reach can be cast as bias or censorship.
The tech companies can’t really explain why such differences arise, because the causes are hidden inside layers of apparatus, nicknamed The Algorithm. In turn, the algorithm becomes an easy target for blame, censure, or reprisal. And in the interim, the machinery of megascale churns on, further eroding trust in, and the reliability of, information of any kind, including understandings of how social software currently operates or what it might do differently.
Conversely, design constraints on the audience and reach that apply equally to everyone offer a means to enforce the suppression of contact, communication, and spread. To be effective, those constraints must be clear and transparent—that’s what makes Twitter’s 280-character format legible and comprehensible.
They could also be regulated, implemented, and verified—at least more easily than pressuring companies to better moderate content or to make their algorithms more transparent. Finally, imposing hard limits on online social behavior would embrace the skills and strengths of computational design, rather than attempting to dismantle them.
This would be a painful step to take, because everyone has become accustomed to megascale. Technology companies would surely fight any effort to reduce growth or engagement. Private citizens would bristle at new and unfamiliar limitations. But the alternative, living amid the ever-rising waste spewed by megascale, is unsustainable.
If megascale is the problem, downscale has to be the solution, somehow. That goal is hardly easy, but it is feasible, which is more than some competing answers have going for them. Just imagine how much quieter it would be online if it weren’t so loud.
Ian Bogost is a contributing writer at The Atlantic and the Director of the Program in Film & Media Studies at Washington University in St. Louis. His latest book is Play Anything.