Lifesaving Intervention or Rogue Research Experiment?

Facebook’s suicide prevention program may border on unchecked medical research, experts argue.

Facebook’s algorithm flags posts that potentially include suicidal threats. A team assesses imminent risk before contacting authorities who can deploy first responders for a “wellness check.”

John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, says the program may amount to medical intervention without informed consent or any of the other checks and balances that govern research. Facebook counters that the technology is intended to help, is not diagnostic, and does not violate privacy rights.

“People may be vulnerable [or] in crisis, but that doesn’t mean you can abnegate basic ethical principles,” Torous says.

