The app uses artificial intelligence to flag potentially harmful messages before they are sent
Using artificial intelligence (AI), the dating app identifies and flags harmful or offensive messages while they are being drafted, asking the user “Are You Sure?” before they hit send.
The new feature is similar to an existing safety feature in the app, “Does This Bother You?”.
During a conversation, if a user receives a message which Tinder’s AI deems inappropriate or potentially abusive, they will be asked in a pop-up: “Does This Bother You?” If the answer is yes, they can report the sender.
“Are You Sure?” instead proactively intervenes and warns the sender of a message before it is sent, Tinder said.
The dating app said early testing of the feature has reduced inappropriate language in messages by 11 per cent.
“Words are just as powerful as actions, and today we’re taking an even stronger stand that harassment has no place on Tinder,” Tracey Breeden, head of safety and social advocacy for Match Group, which owns Tinder, said.
“The early results from these features show us that intervention done the right way can be really meaningful in changing behaviour and building a community where everyone feels like they can be themselves.”
According to Tinder, the length of conversations on the app has increased by 32 per cent since the onset of the pandemic.
Katie Russell, national spokesperson for Rape Crisis England & Wales, said it is vital that apps like Tinder, which make money from online dating, ensure their platforms are as safe as possible.
Commenting on “Are You Sure?”, Ms Russell said: “This feature is particularly important as it is directed at perpetrators and potential perpetrators of harassment, abuse and misconduct, rather than victims and potential victims.
“Tools like these must be coupled with measures including a zero-tolerance approach towards offensive and abusive customers, strong and clear messaging to that effect, and effective mechanisms for people targeted by inappropriate behaviour to report it.”
Violet Alvarez, senior policy officer at the Suzy Lamplugh Trust, said the increased use of online dating platforms since the start of the pandemic comes with “the urgent need to safeguard users”.
“Safety guidelines should be clear, accessible and outline unacceptable and illegal harmful behaviours. Dating platforms should also be developing ongoing partnerships with independent specialised services that address specific types of abuse, as well as statutory services such as the police,” she said.