Specific definitions can be given by focusing either on the internal reasoning processes or on the external behavior of the intelligent system, and by using as a measure of effectiveness either similarity to human behavior or to an ideal behavior, called rational:
Acting in a way similar to human beings: the result of the operation performed by the intelligent system is indistinguishable from that performed by a human.
Thinking in a way similar to human beings: the process that leads the intelligent system to solve a problem resembles the human one. This approach is associated with cognitive science.
Thinking rationally: the process that leads the intelligent system to solve a problem is a formal procedure grounded in logic.
Acting rationally: the process that leads the intelligent system to solve a problem is the one that yields the best expected result given the available information.

Artificial intelligence is a discipline debated among scientists and philosophers, as it raises ethical as well as theoretical and practical questions. In 2014 Stephen Hawking warned of the dangers of artificial intelligence, considering it a threat to the survival of humanity.
Many steps led to the birth of this discipline. The first, both in importance and in chronological order, is the advent of computers and the continuing interest in them. As early as 1623, thanks to Wilhelm Schickard, it was possible to build machines capable of performing mathematical calculations with numbers of up to six digits, even if not autonomously. In 1642 Blaise Pascal built a machine capable of performing operations with automatic carrying, while in 1674 Gottfried Wilhelm Leibniz created a machine capable of adding, subtracting, and multiplying recursively. Between 1834 and 1837 Charles Babbage worked on a model of a machine called the Analytical Engine, whose characteristics partly anticipated those of modern computers. In the twentieth century interest in computers returned to prominence: in 1937, for example, Claude Shannon, at MIT, showed how Boolean algebra and binary operations could represent the switching circuits used in telephone systems.
A further important step was Alan Turing's 1936 article, On Computable Numbers, with an Application to the Entscheidungsproblem, which laid the foundations for concepts such as computability and the Turing machine, definitions fundamental to computers up to the present day. Later, in 1943, McCulloch and Pitts produced what is regarded as the first work on artificial intelligence. Their system employs a model of artificial neurons whose state can be "on" or "off", with a neuron switching "on" when stimulated by a sufficient number of surrounding neurons.
McCulloch and Pitts thus came to show, for example, that any computable function can be represented by some network of neurons, and that all logical connectives ("and", "or", ...) can be implemented by a simple neural structure.
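The idea behind this result can be sketched with a minimal threshold unit in the spirit of the McCulloch–Pitts model (a modern illustration, not the authors' original formalism): a binary neuron fires when the weighted sum of its inputs reaches a threshold, and choosing suitable weights and thresholds yields the logical connectives.

```python
def mp_neuron(inputs, weights, threshold):
    """Return 1 ("on") if the weighted sum of binary inputs meets the threshold, else 0 ("off")."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical connectives realized as single threshold units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)   # fires if at least one input fires
NOT = lambda a:    mp_neuron([a],    [-1],  0)    # an inhibitory input suppresses firing

print(AND(1, 1), OR(0, 1), NOT(1))  # → 1 1 0
```

Since these units can be wired into networks, any Boolean function built from such connectives can in principle be represented by a network of neurons of this kind.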
Seven years later, in 1950, two Harvard University students, Marvin Minsky and Dean Edmonds, built what is recognized as the first artificial neural network computer, known as SNARC.
The actual birth of the discipline (1956)
In 1956, at Dartmouth College in New Hampshire, a conference was held that brought together some of the leading figures of the nascent field of computing devoted to the development of intelligent systems: John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester. On McCarthy's initiative, a team of ten