Roman Suzi
Apr 3, 2023

Science has been aware of the ethical side of this story since the 20th century or so. Sci-fi and Black Mirror, not to mention the futurists, have already explored every possible scenario. Why all the fuss now? Maybe some big tech competitors or the military-industrial complex want this pause so they can catch up?

Horse owners fearing automobiles?

Fear of emerging capabilities? Laughable. Any human fool has those and is dangerous in the wrong place.

Societal consequences? Yes, this technology will bring them. Nothing new: see https://en.wikipedia.org/wiki/Amusing_Ourselves_to_Death by Neil Postman. Society is already on a dangerous track with or without AI. But AI is needed to help out, so stopping is more dangerous than developing.

Safety protocols? How many victims are there already, especially compared to the victims of humans? Please develop those protocols in parallel. And where were you all these years?

I am all for more accuracy, transparency, and so on. But as far as I know, AI is not yet deciding on its own usage.

Policymaking? Freedom to compute comes to mind. Humanity already has a lot of experience dealing with computing (of which AI is a part). However, regulations are mostly about parting users from their freedom (for any number of "good reasons").

In this call I see elites who simply do not want ordinary people to have the fruits, just as happened with the aviation industry.

This hype comes and goes. There is much more benefit in having more intelligence rather than less, sooner rather than later.

The only downside I see is that humans may become more stupid as AI takes over what is left of the intellectual routines. But that is already happening, so it has little to do with AI.
