"Readocracy is on a mission to bring integrity back to the internet." Get credit for what you read online. Prove you are well-informed on any subject.

You’ve been invited to join
🔑

You're one of the first people joining the hub.

You’ll also gain access to ’s Collections, Discussions, Library, and more, and get recognized for the helpful content you share.

This hub’s main subjects are:

This hub’s personal interest subjects (shared casual interests) are:

Reading you do on these subjects that you save publicly will be shared with this hub. To edit this or view advanced filtering, click here.

To finish joining , you must complete all the onboarding steps. If you exit now, your invitation will not be accepted yet.

You’ve been invited to join 🔑

Now your expertise and interests can be recognized and leveraged, and members with similar interests can find you within . You’ll also gain access to ’s Collections, Discussions, Library, and more.

Let’s make this work in a way you’re 100% comfortable with:

The fun part: some custom questions chosen by .
(Answers are public.)

Don’t overthink it; you can edit these later.

Thanks, , you can now join!

These are ’s official subjects.
First, choose which are relevant to you:

Professional Subjects
(career-related)

Personal Interest Subjects
(hobbies, misc interests, etc.)

Great. Now edit the list below. These are the subjects others will see associated with you within . To remove any, just click them.

Professional Subjects
(career-related)

Personal Interest Subjects
(hobbies, misc interests, etc.)

’s official subjects are below.
First, choose which are relevant to you:

Professional Subjects
(career-related)

Personal Interest Subjects
(hobbies, misc interests, etc.)

Great. Now edit the list below. These are the subjects others will see associated with you within .

Currently, none are selected. Please click the ones you’re comfortable having associated with you.

Professional Subjects
(career-related)

Personal Interest Subjects
(hobbies, misc interests, etc.)

New hub: For bringing together a company, team, class, or community under one hub, where you can better aggregate, organize, and keep up with each other’s learning and expertise (privacy-first: every member stays in control), and, optionally, selectively showcase it externally.

We found the following similar hubs:

You can join any of them by clicking the contact button.

Choose at least 5 subjects that this team is passionate about. At least 1 must be a Personal subject and at least 4 must be Professional subjects. You can customize subjects later, too.

            Personal subjects are subjects that are not career-related. E.g. your hub might have many people who are into the NBA, or Food & Drink, or Travel, etc.

            Add another

            Does your team / community have any functional groupings?
E.g. Engineering, Marketing, Accounting, etc., or Host, Speaker, Subscriber, etc.

When people join, this option lets you or them choose which grouping they associate with. This then allows everyone to filter more efficiently in your dashboard.

            Add another

            Choose an avatar for your Hub


Verified by award-winning technology. See how and why this credential matters more than most, in under 90 seconds.

            Dec 12, 2024

            Authentic Badge

            Earned by Mario Vasilescu
            for Good Internet Citizenship

            65 credits earned across

            7 articles

            17 contributions

            See Mario’s learning for yourself.

            This is Mario’s Verified Learning Transcript.

            Content most deeply engaged with

            Modern technologies are amplifying these biases in harmful ways, however. Search engines direct Andy to sites that inflame his suspicions, and social media connects him with like-minded people, feeding his fears. Making matters worse, bots—automated social media accounts that impersonate humans—enable misguided or malevolent actors to take advantage of his vulnerabilities.

            Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. Unable to process all this material, we let our cognitive biases decide what we should pay attention to. These mental shortcuts influence which information we search for, comprehend, remember and repeat to a harmful extent.

            One of the first consequences of the so-called attention economy is the loss of high-quality information. The OSoMe team demonstrated this result with a set of simple simulations. It represented users of social media such as Andy, called agents, as nodes in a network of online acquaintances. At each time step in the simulation, an agent may either create a meme or reshare one that he or she sees in a news feed. To mimic limited attention, agents are allowed to view only a certain number of items near the top of their news feeds.

            Running this simulation over many time steps, Lilian Weng of OSoMe found that as agents' attention became increasingly limited, the propagation of memes came to reflect the power-law distribution of actual social media: the probability that a meme would be shared a given number of times was roughly an inverse power of that number. For example, the likelihood of a meme being shared three times was approximately nine times less than that of its being shared once.
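(An aside to make the model concrete: the example above implies a power-law exponent near 2, since if P(k) ∝ k^(-2), then P(3)/P(1) = 3^(-2) = 1/9. Below is a minimal sketch of such a limited-attention meme model; the network size, attention span, and create/reshare probability are illustrative assumptions, not the OSoMe parameters.)

```python
# Minimal sketch of a limited-attention meme-propagation model.
# All parameters below are illustrative assumptions, not OSoMe's.
import random
from collections import Counter

random.seed(42)

N_AGENTS = 200    # agents (nodes) in the acquaintance network (assumed)
ATTENTION = 5     # feed items an agent can actually see (assumed)
STEPS = 20_000    # simulation time steps (assumed)
P_CREATE = 0.3    # probability of creating a new meme vs. resharing (assumed)

# Each agent follows a handful of random others.
following = {a: set(random.sample([b for b in range(N_AGENTS) if b != a], 8))
             for a in range(N_AGENTS)}
feeds = {a: [] for a in range(N_AGENTS)}  # newest first, trimmed to ATTENTION
share_counts = Counter()                  # meme id -> number of reshares
next_meme_id = 0

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    if random.random() < P_CREATE or not feeds[agent]:
        meme = next_meme_id                  # create a brand-new meme
        next_meme_id += 1
    else:
        meme = random.choice(feeds[agent])   # reshare something visible
        share_counts[meme] += 1
    # The meme lands at the top of every follower's feed; limited attention
    # means each feed keeps only its most recent ATTENTION items.
    for follower in range(N_AGENTS):
        if agent in following[follower]:
            feeds[follower].insert(0, meme)
            del feeds[follower][ATTENTION:]

# Histogram of "shared exactly k times". Under limited attention this tail
# is roughly a power law, P(k) ~ k^(-alpha), with heavier inequality as
# ATTENTION shrinks.
histogram = Counter(share_counts.values())
for k in sorted(histogram)[:8]:
    print(f"shared {k:2d} times: {histogram[k]} memes")
```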

            even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.

            even when people encounter balanced information containing views from differing perspectives, they tend to find supporting evidence for what they already believe. And when people with divergent beliefs about emotionally charged issues such as climate change are shown the same information on these topics, they become even more committed to their original positions.

            search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users' past preferences. They prioritize information in our feeds that we are most likely to agree with—no matter how fringe—and shield us from information that might change our minds. This makes us easy targets for polarization.

            when people were isolated into “social” groups, in which they could see the preferences of others in their circle but had no information about outsiders, the choices of individual groups rapidly diverged. But the preferences of “nonsocial” groups, where no one knew about others' choices, stayed relatively stable.

            Social media follows a similar dynamic. We confuse popularity with quality and end up copying the behavior we observe.

            if everybody else is talking about it, it must be important. In addition to showing us items that conform with our views, social media platforms such as Facebook, Twitter, YouTube and Instagram place popular content at the top of our screens and show us how many people have liked and shared something. Few of us realize that these cues do not provide independent assessments of quality.

            programmers who design the algorithms for ranking memes on social media assume that the “wisdom of crowds” will quickly identify high-quality items; they use popularity as a proxy for quality.

            The first person in the social diffusion chain told the next person about the articles, the second told the third, and so on. We observed an overall increase in the amount of negative information as it passed along the chain—known as the social amplification of risk.

            Even worse, social diffusion also makes negative information more “sticky.” When Jagiello subsequently exposed people in the social diffusion chains to the original, balanced information—that is, the news that the first person in the chain had seen—the balanced information did little to reduce individuals' negative attitudes.

            during Spain's 2017 referendum on Catalan independence, social bots were leveraged to retweet violent and inflammatory narratives, increasing their exposure and exacerbating social conflict.

            Our simulations show that these bots can effectively suppress the entire ecosystem's information quality by infiltrating only a small fraction of the network. Bots can also accelerate the formation of echo chambers by suggesting other inauthentic accounts to be followed, a technique known as creating “follow trains.”

Some manipulators play both sides of a divide through separate fake news sites and bots, driving political polarization or monetization by ads. At OSoMe, we recently uncovered a network of inauthentic accounts on Twitter that were all coordinated by the same entity. Some pretended to be pro-Trump supporters of the Make America Great Again campaign, whereas others posed as Trump “resisters”; all asked for political donations.

            users are more likely to share low-credibility articles when they believe that many other people have shared them.

Speaks to the power of social signalling around information. What if it didn’t just show how many other people shared, but the information track record around them? How many *credible* users shared?
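(A hypothetical sketch of that idea follows; the Sharer model, credibility scores, and cutoff are invented for illustration and are not any platform's API. The point: report not just how many people shared, but how much credibility those shares carry.)

```python
# Hypothetical sketch of a credibility-weighted share signal. The Sharer
# model, scores, and cutoff are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Sharer:
    name: str
    credibility: float  # 0.0-1.0 track-record score (assumed scale)

def share_signal(sharers: list[Sharer], credible_cutoff: float = 0.7):
    """Return the raw share count, the credibility-weighted count, and how
    many sharers clear an (assumed) credibility threshold."""
    raw = len(sharers)
    weighted = sum(s.credibility for s in sharers)
    credible = sum(1 for s in sharers if s.credibility >= credible_cutoff)
    return raw, round(weighted, 1), credible

shares = [Sharer("a", 0.9), Sharer("b", 0.2), Sharer("c", 0.8),
          Sharer("d", 0.1), Sharer("e", 0.2)]
print(share_signal(shares))  # (5, 2.2, 2): five shares, but only two credible
```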

            institutional changes are also necessary to curb the proliferation of fake news.

            One of the best ideas may be to make it more difficult to create and share low-quality information. This could involve adding friction by forcing people to pay to share or receive information. Payment could be in the form of time, mental work such as puzzles, or microscopic fees for subscriptions or usage. Automated posting should be treated like advertising.

            Friction! Friction by design will lead to a better internet. Of course, this is a problem when the main monetization scheme relies on minimal friction and maximum impulse.
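(A toy sketch of friction by design, under my own assumptions rather than anything the article specifies: charge each post a "payment in time" that doubles with recent posting volume, so occasional posters barely notice it while flooding, the behavior bots rely on, becomes expensive.)

```python
# Toy "friction by design" posting gate: a payment in time that grows with
# how often an account has posted recently. The mechanism and all parameters
# are assumptions for illustration, not from the article.
import time
from collections import deque

WINDOW_S = 3600.0    # look-back window for recent posts (assumed)
BASE_DELAY_S = 2.0   # cost in seconds of the first post in the window (assumed)

class FrictionGate:
    def __init__(self):
        self.recent = deque()  # timestamps of this account's recent posts

    def cost(self, now: float) -> float:
        # Forget posts that have aged out of the look-back window.
        while self.recent and now - self.recent[0] > WINDOW_S:
            self.recent.popleft()
        # Each additional recent post doubles the delay: speaking stays
        # cheap, flooding gets expensive, which is what hurts bots most.
        return BASE_DELAY_S * (2 ** len(self.recent))

    def post(self, submit) -> None:
        now = time.time()
        time.sleep(self.cost(now))  # pay the cost in time before posting
        self.recent.append(now)
        submit()

gate = FrictionGate()
gate.post(lambda: print("posted"))  # first post: ~2s delay
gate.post(lambda: print("posted"))  # second post in the window: ~4s delay
```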

            Free communication is not free. By decreasing the cost of information, we have decreased its value and invited its adulteration. To restore the health of our information ecosystem, we must understand the vulnerabilities of our overwhelmed minds and how the economics of information can be leveraged to protect us from being misled.

As the saying goes, “free as in freedom”, or “free as in free beer”?

Thomas Hills is a professor of psychology and director of the Behavioral and Data Science master’s program at the University of Warwick in England. His research addresses the evolution of mind and information.

            Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana University.

            In a fascinating 2006 study involving 14,000 Web-based volunteers, Matthew Salganik, then at Columbia University, and his colleagues found that when people can see what music others are downloading, they end up downloading similar songs.

On the surface, this seems obvious. But look closer and it provokes profound questions about the homogenization of opinion and society in a social media-driven world.

In 2017 we estimated that up to 15 percent of active Twitter accounts were bots—and that they had played a key role in the spread of misinformation during the 2016 U.S. election period.

            Has Musk reached out? :|

            Time Spent

            1h 5m

            # of Content Items

            7 items

            7 articles

            # of Contributions

            17

            Full list of content consumed, including annotations

            37 highlights & notes

            2 minutes Engaged reading, read (01/13/22)

            people in general, for one reason or another, like short objections better than long answers

            The Gish gallop, a term coined in 1994 to refer to creationism debates, is a rhetorical technique that relies on overwhelming an opponent with specious arguments, half-truths, and misrepresentations that each require considerably more time to refute or fact-check than they did to state in the first place.

            wikipedia.org |

            Brandolini's law - Wikipedia

            Brandolini's law, also known as the bullshit asymmetry principle, is an internet adage that emphasizes the difficulty of debunking false, facetious, or otherwise misleading information:[1] "The amount of energy needed to refute bullshit is an order of magnitude larger than is needed to produce it."[2][3] It was publicly formulated the first time in January 2013[4] by Alberto Brandolini, an Italian programmer. Brandolini stated that he was inspired by reading Daniel Kahneman's Thinking, Fast and Slow right before watching an Italian political talk show with journalist Marco Travaglio and former Prime Minister Silvio Berlusconi attacking each other.[5][6]

            3 minutes Engaged reading, read (11/01/21)

            wikipedia.org |

            Online disinhibition effect - Wikipedia

            Online disinhibition effect is the lack of restraint one feels when communicating online in comparison to communicating in-person.[1] People feel safer saying things online which they would not say in real life because they have the ability to remain completely anonymous and invisible behind the computer screen.[2] Apart from anonymity, other factors such as asynchronous communication, empathy deficit, or individual personality and cultural factors also contribute to online disinhibition.[3][4] The manifestations of such an effect could be in both positive and negative directions. Thus online disinhibition could be classified as either benign disinhibition or toxic disinhibition.[1]

            10 minutes Engaged reading, read (10/25/20)

There has always been a form of censorship: “The day Facebook went algorithmic, with an advertising-based business model, they started censoring the boring and mundane.” This is a brilliant take: “From the day they went algorithmic, Facebook started throttling the un-engaging. The boring among us became voiceless. Tempered viewpoints are shut down. The mundane moments of life are no longer worth sharing. That off-center, not-quite-in-focus photo of your kids had as much right to show up in your friends’ feeds as the latest post from Dan Bongino, but it won’t.” We are all products of the platforms. The medium is the message. “Social platforms have very carefully trained us on how to ‘not be boring’ in the very specific mold that they define.”

            readmargins.com |

            Facebook is censoring me

Ranjan here, and this week I’ll be writing on social media and censorship. I know this might feel like a lot of Facebook posts in a row from us, but it’s hard, given the proximity to Nov 3rd, to not focus on these topics. Last week Don Jr. tweeted. I feel you Don Jr, I really do, because I, too, have felt the heavy hand of Silicon Valley throttling my communications to my friends and followers. Facebook, Instagram, Twitter - they’ve all censored me, and in this post, I’m ready to fight back.

            25 minutes Engaged reading, read (05/16/22)


            5 minutes Engaged reading, read (10/12/20)

            thedrum.com |

            Is everything on the internet fake?

            This is an edited transcript of a talk that The Drum’s Promotion Fix columnist, Samuel Scott, recently gave at The CMO Network in the UK. The marketing industry had so many expectations of the digital world. For example, everything would be real, measured and trackable. But so many of those expectations turned out to be wrong. In short, much of what we see online has turned out to be completely and utterly fake. We hear so many people talking about being ‘digital first’ or ‘digital only’. But is that just completely and utterly wrong?

            13 minutes Engaged reading, read (05/24/21)

            "the company "blocked or removed" 56 million policy-violating reviews and nearly three million fake business profiles in 2020 alone" - and they're still barely scratching the surface! The scale is mind blowing. "it's implausible that of 71 reviewers for a downtown Toronto pizzeria, 50 would also have used that same lawn care company and 20 bought a wig from a single store in Vaughan, Ont." " as soon as they were posted to Riverbend's page, an obscure online marketing firm would get in touch with an offer to remove them — for a fee. "We'd get 14 [reviews] at midnight on a Saturday, we would get 10 in a row, you know, within an hour on a Wednesday night," he said. Sometimes Pereira felt like the review removal company, which sent him screenshots of newly posted negative reviews, was taunting him, he said. "It's definitely coordinated," he said." "None of this is illegal. Many companies selling fake reviews do so openly on social media platforms like Facebook. One such group found by CBC News is called, simply, Buy Google Reviews, and advertises packages starting from $5." "Dean believes that the fake review industry is able to exist because technology companies like Google aren't interested in solving the problem."

            cbc.ca |

            Why you can't believe everything you read on Google reviews | CBC News

            When Roman Abramovich, a Russian billionaire and owner of the English Premier League's Chelsea Football Club, appeared to have posted a Google review complaining that a Manitoba moving company lost three of his watches, Chris Pereira knew something was wrong. The oligarch had never been a customer at Riverbend Moving and Storage, a small business that offers residential and commercial moving services in Winnipeg. The review was fake, and fit a pattern that Pereira, the company's vice president of sales, had been observing for months — a slew of made-up complaints targeting the company's online reputation.

            6 minutes Engaged reading, read (01/19/22)

            There, for anyone to see on her public Facebook account, were all of the embarrassing moments from my childhood: The letter I wrote to the tooth fairy when I was five years old, pictures of me crying when I was a toddler, and even vacation pictures of me when I was 12 and 13 that I had no knowledge of. It seemed that my entire life was documented on her Facebook account, and for 13 years, I had no idea.

            I could understand why my mother would post these things; to our extended family and her friends they were cute, funny moments. But to me they were mortifying. Scrolling through my sister’s tweets, I saw what my sister had been laughing about. She would frequently quote me and the random things I would say, it seemed anything I had ever said to her that she thought was funny was fair game. Things I had no idea she was posting online.

I had just turned 13, and I thought I was just beginning my public online life, when in fact there were hundreds of pictures and stories of me that would live on the internet forever, whether I wanted it to be or not, and I didn’t have control over it. I was furious;

            I confessed that I felt like my privacy was violated, because I felt like they had no right to take pictures of me or quote me on their Facebook and Twitter accounts without my permission.

            They didn’t know I would get so upset over it, because their intentions weren’t to embarrass me, but to keep a log and document what their little sister/youngest daughter was doing in her early childhood and young teenage years.

            Teens get a lot of warnings that we aren’t mature enough to understand that everything we post online is permanent, but parents should also reflect about their use of social media and how it could potentially impact their children’s lives as we become young adults.

            Every October my school gave a series of presentations about our digital footprints and online safety. The presenters from an organization called OK2SAY, which educates and helps teenagers about being safe online, emphasized that we shouldn’t ever post anything negative about anyone or post unapproved inappropriate pictures, because it could very deeply affect our school lives and our future job opportunities.

            I also have a lot more opportunities to be social outside of the digital world, especially in middle school and entering high school now that there are more extracurriculars and clubs available for students to meet and socialize through.

            My friends are active social media users, but I think they are more cautious than they were before. They don’t share their locations or post their full names online, and they keep their accounts private. I think in general my generation has to be more mature and more responsible than our parents, or even teens and young adults in high school and college.

            We are more cautious than people have been before.

            For my generation, being anonymous is no longer an option. For many of us, the decisions about our online presence are made before we can even speak.

            This should be mandatory reading for every internet user. Haven't read something this eye-opening in a while. Great fodder for a true paradigm shift.

            fastcompany.com |

            I’m 14, and I quit social media after discovering what was posted about me

            This story is part of The Privacy Divide, a series that explores the fault lines and disparities—cultural, economic, philosophical—that have developed around digital privacy and its impact on society. “Ha, that’s funny!” My 21-year old sister would comment when she saw my mother’s most recent Facebook update or her latest tweet. Social media has been significant to my sister’s social life since she was 13, and she has constantly posted on Twitter and Facebook for nearly a decade.

            Gold Badge Earned

            Earned by
            Mario Vasilescu

            For
            Good Internet Citizenship