How LinkedIn stops InMail harassment

Last week, LinkedIn announced a new process for blocking unwanted messages on its InMail platform. Targeting harassing messages has become a big concern for LinkedIn officials in recent years.

Even with this new initiative, it’s frustrating to hear that some professionals (particularly women) are dealing with this kind of behavior on, of all places, LinkedIn.

We used to liken LinkedIn to other social media platforms by calling it the “Facebook for professionals.” While this was true in the early days, the platform has morphed into much more, including the ability to research, build your skillset (LinkedIn Learning) and connect directly with other like-minded professionals (InMail).

InMail works similarly to Facebook’s Messenger and other messaging services, allowing users to send messages, share files and do most of the same things we can do with traditional email. For LinkedIn users, this service is particularly useful for connecting job seekers and employers, but it’s also an important tool for building our professional networks — not just social networks.

Unfortunately, just as on other social platforms, some users harass others on LinkedIn’s public feed. Unlike with InMail, though, there is a built-in check on public harassment: because of the platform’s businesslike culture, posts that sink below civilized debate quickly run afoul of LinkedIn’s unwritten social norms.

“We find that reported cases of harassment predominantly stem from private messages rather than the public feed,” said Grace Tang, a LinkedIn engineer focused on anti-abuse initiatives.

“In order for members to confidently engage in this community, they have to feel safe,” Tang added. “This sense of safety is at risk when spam, inappropriate, or harassing content is shared on the platform. This content is not tolerated on LinkedIn, and we have rolled out proactive and reactive measures that employ a combination of technology and human expertise to protect members.”

To help prevent these kinds of messages, LinkedIn’s strategy to limit harassment focuses on education (enforcement of policies), detection and support for affected members.

Detecting harassment is where machine learning comes into play. LinkedIn has developed models to detect possible harassment within InMail messages.

“These models work to protect the recipient by hiding potentially harassing messages while also giving the recipient the ability to unhide and view a message, and optionally report it,” Tang said.

Tang’s team found that harassing messages typically fall into three categories: romance scams, inappropriate advances, and targeted harassment.

Using data from messages in those categories, LinkedIn built a harassment detection system that can identify violating members and their harassing messages with high precision. The system scores the sender’s behavior, the message content and the interaction between the two members in the conversation.

“We apply these models in a sequence to minimize unnecessary account or message analysis by not proceeding with additional model scoring unless the previous model flags the traffic as suspicious,” Tang added.
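LinkedIn hasn’t published the code for this pipeline, but the staged approach Tang describes — only running the next model when the previous one flags the traffic — can be sketched roughly as follows. The function names, signals and thresholds here are hypothetical stand-ins, not LinkedIn’s actual models.

```python
# Hypothetical sketch of a staged (cascade) harassment-detection
# pipeline. Each stage runs only if the previous stage flagged the
# traffic as suspicious, avoiding unnecessary model scoring.

def sender_behavior_score(sender):
    # Stand-in for a model over account-level signals
    # (e.g., outbound InMail volume, prior member reports).
    return sender.get("risk", 0.0)

def message_content_score(text):
    # Stand-in for a text classifier over the message body.
    flagged_terms = {"scam", "explicit"}
    return 1.0 if set(text.lower().split()) & flagged_terms else 0.0

def interaction_score(sender, recipient):
    # Stand-in for a model over the conversation between the two
    # members (e.g., whether the recipient has ever replied).
    return 0.9 if not sender.get("recipient_replied") else 0.1

def should_hide(sender, recipient, text, thresholds=(0.5, 0.5, 0.5)):
    """Run the models in sequence; stop early if a stage looks benign."""
    stages = (
        lambda: sender_behavior_score(sender),
        lambda: message_content_score(text),
        lambda: interaction_score(sender, recipient),
    )
    for stage, threshold in zip(stages, thresholds):
        if stage() < threshold:
            return False  # not suspicious; skip the remaining models
    return True  # every stage flagged the message: hide it

# Example: a high-risk sender, flagged content, no prior reply
print(should_hide({"risk": 0.8}, {}, "obvious scam message"))  # True
```

The cheapest signals are checked first, so most legitimate messages exit the pipeline after a single score rather than being run through every model.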

Tang noted that the model continually evolves and learns from signals in other harassing messages.

To learn more, check out https://engineering.linkedin.com/blog.

Dr. Adam Earnheardt is a professor of communication at Youngstown State University. Follow him on Twitter at @adamearn and on his blog at www.adamearn.com.
