Gabriel Mario Rodrigues – Chairman of the Board of Directors of ABMES
Artificial intelligence (AI) is human-like intelligence exhibited by machines or software. It is also a field of academic study. Leading researchers and textbooks define the field as “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. (Wikipedia)
I was about to follow up my previous article, published on the ABMES Blog – “What happens when machines do everything?” – when I received a video from pastor and coach Carlos Maia describing the incredible transformations of the last ten years: Netflix finished off the movie rental companies; Spotify, the record companies; Google, the encyclopedias; WhatsApp, the telephone operators; Uber, the taxi drivers; Booking and Trivago complicated life for travel agencies; and many others that you can see in the video. All due to the ingenuity of human intelligence, now exponentially supported by Artificial Intelligence (AI) and the global exchange of ideas.
To validate this concept of sharing, I transcribe the contribution I received from professor and consultant Roney Signorini, who considers it indispensable to start at the beginning – that is, by telling a little of the history of AI:
“What scientists and scholars report about the history of AI is extraordinary and admirable: they claim it began in antiquity. That alone is reason enough to search the Wikipedia articles that leave anyone perplexed and that made this review possible.
From the start, myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen have circulated for centuries. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thought as the mechanical manipulation of symbols.
That work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. The device and the ideas behind it inspired a handful of scientists to begin to seriously discuss the possibility of building an electronic brain.
In the seventeenth century, Leibniz, Thomas Hobbes, and René Descartes explored the possibility that every rational thought could be made as systematic as algebra or geometry. Thus Hobbes wrote that “reason is nothing but a calculation.” These philosophers began to articulate the physical symbol system hypothesis that would become the guiding thread of AI research.
In the 1940s and 1950s, a handful of scientists from various fields (mathematics, psychology, engineering, economics, and political science) began discussing the possibility of creating an artificial brain. Artificial intelligence research was founded as an academic discipline in 1956. Now, according to Agence France-Presse (AFP), the Massachusetts Institute of Technology (MIT) has announced that it is creating an artificial intelligence (AI) college with an initial investment of US$ 1 billion, focused on the “responsible and ethical” use of the technology.
The earliest research on thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, the 1940s, and the early 1950s, which, drawing on knowledge and studies in neurology, showed the brain to be an electrical network of neurons firing in all-or-nothing pulses.
It all began in 1956 at the Dartmouth Conference, with the thematic proposal: “Every aspect of learning or any other feature of intelligence can be described so precisely that a machine can be made to simulate it.” At the event, scientists John McCarthy and Marvin Minsky, considered the fathers of AI, persuaded participants to accept “Artificial Intelligence” as the name of the field, with its mission, its first successes, and its main actors. This is widely considered the birth of AI.
The years that followed the Dartmouth Conference, let us say until 1974, were an era of discovery, a race into new areas. The programs developed during this time were, for most people, simply “astonishing”: computers were solving algebra word problems, proving theorems in geometry, and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all. Researchers expressed intense optimism, predicting that a fully intelligent machine would be built in less than 20 years, and government agencies accordingly poured a great deal of money into the new field.
It became obvious that they had underestimated the project’s difficulties. In response to criticism and continued pressure from Congress, the US and UK governments cut off funding for undirected research in artificial intelligence, and the difficult years that followed would later be known as the “AI winter.” Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide billions of dollars for AI. But in the late 1980s, investors became disillusioned with the lack of the necessary computing power and withdrew funding once again.
Investment and interest in AI increased in the early years of the 21st century, when machine learning was successfully applied to many problems in academia and industry, thanks to the availability of powerful hardware.
The most interesting thing I found in my readings was the confirmation that artificial intelligence rests on the assumption that the process of human thought can be mechanized. Several logical machines were then developed, dedicated to producing knowledge by means of logic: mechanical entities that could combine basic and undeniable truths through simple logical operations, carried out by the machine by mechanical means, in order to produce all possible knowledge. Amazing.
So too contributed the study of cybernetics, which described control and stability in electrical networks, and information theory, which described digital signals (i.e., all-or-nothing signals, the famous ones and zeros). And perhaps the most important contribution was that of Alan Turing, who with his theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it would be possible to construct an electronic brain. Eureka! Here was AI’s challenging entrance exam.
Turing argued convincingly that a “thinking machine” was at least plausible and answered all the common objections to the proposition. This would become the first serious proposal in the philosophy of artificial intelligence.
Thinking about all this context of AI and its application in higher education, I also highlight the excellent reporting by journalist Vinícius de Oliveira, published in Porvir on January 21, 2019, which presents the ten most important developments in AI in higher education. I recommend reading it.
The article was based on “Ten Facts About Artificial Intelligence in Teaching and Learning,” produced by the Canadian NGO Contact North and funded by the government of Ontario.
Analyzing these ten factors dispassionately, it becomes clear that AI will transform the traditional learning created by educational systems designed to prepare blue-collar labor for factories and white-collar workers for the offices and enterprises of the industrial era – with common standards, similar buildings, the same inflexible curricula, and the same teachers teaching the same things they had learned 30 years earlier, to students with entirely different minds and life goals.
Those who reflect seriously on this will realize that if we are to stay in the business of education, we will necessarily need to put AI at the center of our lives.