Last week the research startup OpenAI set the technology world ablaze with the debut of ChatGPT, a prototype conversational program, or “chatbot.” It uses a large language model tuned with machine learning techniques to provide answers on a vast variety of subjects drawn from books and the World Wide Web, including Reddit and Wikipedia. Many users and commentators wondered if its detailed and seemingly well-reasoned responses could be used in place of human-written content such as academic essays and explanations of unfamiliar topics. Others noticed that it authoritatively mixed in factually incorrect information that might slip past non-experts, and wondered if that might be fixed like any other software bug.

The fundamental problem is that an “artificial intelligence” like ChatGPT is unconcerned with the outside consequences of its use. Unlike humans, it cannot hold its own life as a standard of value. It does not remain “alive” through self-sustaining and self-generated action. It does not have to be any more or less rational than its programming to continue its existence, not that it “cares” about that, since it has all the life of an electrical switchboard.

AI can’t know to respect reality, reason, and rights because it has no existential connection to those concepts. It can only fake it, and it can fail without remorse or consequence at any point. In short, “artificial intelligence” is a red herring. Let me know when we’re working on actual ethics. Tell me when you can teach a computer (or a human!) pride and shame and everything in between.
