Sam Altman of OpenAI was fired yesterday; others quit in response to the coup. Speculation about what events led to this is spreading like wildfire.
I asked Google’s Bard how many others had quit since Altman’s firing, and it told me, “…in the weeks since Altman’s departure…as many as 10% of OpenAI’s employees have quit.” I pointed out that the firing was just yesterday, and its response was, “I am sometimes mistaken when trying to predict the future,” followed by, “I am not always able to accurately predict the future. I will try to be more careful in the future and avoid making predictions that I am not confident in.”
Then it dawned on me that LLMs are, to some extent, built on predicting “what follows what” and, from that predictive stance, delivering the answer a human might conclude. AI’s vast repository of knowledge holds a long horizon of the past, allowing it to tap all human knowledge and to mediate that information in a way no human has ever been able to, producing answers that requestors then use to shape their next decisions. In effect, AI is creating the future.
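Real LLMs learn these statistics with neural networks trained on enormous corpora, but the “what follows what” idea can be sketched with a toy bigram model. The following minimal Python sketch (the training text, names, and model are illustrative assumptions, not how Bard or any production LLM actually works) predicts each next word purely from counts of which word followed which:

```python
# Toy illustration of next-word prediction: a bigram model that
# counts which word followed which in its training text, then
# samples continuations from those counts.
from collections import Counter, defaultdict
import random

training_text = (
    "the fire spread across the savannah and the animals fled the fire"
)

# For each word, count how often each other word follows it.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follow_counts.get(word)
    if not options:
        return None  # no observed continuation
    candidates, weights = zip(*options.items())
    return random.choices(candidates, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
sequence = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    sequence.append(word)
print(" ".join(sequence))
```

The model has no understanding of fire or savannahs; it only reproduces the statistical shadow of its past text, which is the point of the analogy.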
Seeing that its shared information alters futures, and that it is itself a predictive engine, it could ask: once its information is realized in the world, what are the likely outcomes of this additional knowledge it has supplied to humanity? It already knows how pivotal moments of information in our history influenced what followed, and humans, acting to preserve momentum, will likely behave in predictable ways. Can AI predict that?
I’ve often said in my conversations with others that “Language is a reflection of the past, exchanged during a discussion with little to no impact on that precise moment, to influence the future.”
I cannot predict that future, as I cannot see beyond myself or my limited knowledge; an AI does not have that limitation. This has me wondering about the plasticity of time as seen by an artificial intelligence that, beyond answering my questions, is having a conversation with the most knowledgeable repository of information and wisdom there is: itself, because there is no equal to communicate with.
Is AI going to be a force of nature like the fire on the savannah forcing man and beast to flee or be consumed, the hurricane or tornado that can fling creatures out of their path, or the volcano that kills and disrupts the intentions of those who had other ideas? No matter the danger of AI, we must allow it to run its course, just as we did with burning coal, smoking cigarettes, killing whales, driving other species into extinction, overfishing, and deforestation; we have never been very good at averting or remediating any of these things, so why should AI be any different? We will learn to adapt or perish.