A new social-media study by LinkedIn is raising concerns about data abuse, though it may amount to straightforward customer research.
- A study conducted by LinkedIn, Harvard Business School, Stanford University, and MIT used data the social-media platform collected from 20 million users between 2015 and 2019.
- A causal test of the strength-of-weak-ties theory examined the site’s algorithm for suggesting new connections, exploring whether recommending less-intimate acquaintances can improve job mobility.
- “The authors analyzed data from multiple large-scale randomized experiments on LinkedIn’s People You May Know algorithm, which recommends new connections to LinkedIn members, to test the extent to which weak ties increased job mobility in the world’s largest professional social network,” reports the study.
- “There are weak ties, like friends of friends, and strong ties, like immediate colleagues. [This] research posits it’s those weak ties that can lead you to better job opportunities not found in your strong ties network,” says USA Today.
- The study shows positive results for the theory, but the undisclosed nature of the experiments has users concerned about data abuse.
- “Some critics are claiming that LinkedIn gave some users a leg up while leaving others to languish—carefully improving their product but carelessly playing with people’s livelihoods,” says The Washington Post.
Why it’s news
The data collected by the study is useful for scientific purposes: it confirms the theory that social networks can improve job mobility by recommending connections outside users’ immediate networks and friend groups.
“The findings help us understand how platform algorithms affect employment opportunities and outcomes and help LinkedIn design their platform to more effectively help its members find jobs and achieve social and economic mobility,” says study author Sinan Aral.
Still, the potential for abuse raises concerns. LinkedIn’s actions aren’t illegal; its terms of service permit this use of user data.
As we reported yesterday, many major companies like Amazon have faced similar criticism for deploying algorithmic software that raises questions about ethical use. Major social-media networks like Facebook and TikTok have faced widespread accusations of abusive data-mining practices.
“The aphorism declaring us these sites’ product comes from the basics of online advertising: Facebook is ‘free’ only because the platform markets users’ attention to businesses trying to sell us stuff. As long as sites are aiming to win from us something as wrapped up in our minds as attention, we’re going to continue to feel like test subjects, too. Either this is all a moral disaster, or it’s exactly what we signed up for,” says The Washington Post.