Michael Laudor is hospitalized at 24, after a psychotic break, as recounted in Jonathan Rosen’s recent memoir The Best Minds. He is diagnosed with schizophrenia.
But he is voluntarily hospitalized, which means he can leave whenever he wants. Even if he is still experiencing psychosis. Even if his doctors have not yet found a medication that can help him. There are laws that protect patients’ rights; a person cannot be held unless they are deemed a danger to themselves or others.
Rosen writes:
“I knew nothing about commitment laws. . . . The psychiatrist Michael had been seeing . . . did not consider him violent. Michael had carried a knife, and slept with a baseball bat, because he thought his parents had been replaced by surgically altered Nazis who had murdered them and wanted to kill him. His psychiatrist considered that defensive behavior, not aggressive.”
This is, sadly, a bit of foreshadowing. (There is a lot of foreshadowing in this book.)
Everyone is different; many people with schizophrenia are not violent. Laws that protect people’s rights are a good thing. The difficulty in Michael’s case is determining whether a person might become violent before it is too late. Often that can’t be done until the person has already harmed someone.
Freedom and safety often conflict. Now I am thinking about America’s debates during the Covid pandemic over masking and over closing schools. Is it better to have the freedom to live without burdensome and harmful restrictions, or the safety of containing a deadly virus?
Would you rather have freedom or safety?
I admit, that’s a trick question. Both freedom and safety are important.
But where do we draw the line?
There’s a fascinating book titled AI 2041, written by Kai-Fu Lee and Chen Qiufan. One author is an AI researcher, the other a science fiction writer, and both formerly worked at Google. In the book, they lay out ten scenarios of what could plausibly happen, given where technology is today. The expectation that AI will predict events well before they happen leads to some interesting moral dilemmas. Do we arrest someone before they commit a crime, because all the signals predicted they would? It’s the proverbial neighbor who admits after the fact, “Yeah, I totally saw this coming.” The book predicts that a non-human AI assistant will be able to make these assessments far better than any human could. The question is, do we really want that?
Yeah, that’s a hard call. With humans or with AI.