


October 25, 2018

Why we should not fear Artificial Intelligence

The emergence of a new technology requires new rules: how do we deal with robots?

Lorena Jaume-Palasí



The idea of an automated society, filled with robots at home and at work, was one of the utopias, and dystopias, that emerged in literature in response to the introduction of automated systems. In the early 20th century, cars and traffic lights brought automation to street level. Since then, the number of machines and automatic processes in our lives has increased exponentially: washing machines, cash dispensers, camera lenses, doors, car washes, thermostats… The fear these things initially provoked has passed, and they are now routine. Automation is so run-of-the-mill that we often don’t even notice it when we encounter it.

Nevertheless, Artificial Intelligence (AI) is not the same as conventional automatic machines; it is a form of advanced automation. Conventional devices are given very exact, programmed rules that enable a machine to perform specific tasks, and their efficiency depends on the detail and precision with which each task was programmed: for example, tracing the shortest route from Seville to Madrid. What AI makes possible is a more abstract form of automation: tracing the fastest route from Seville to Madrid, taking into account roadworks, the number of traffic lights, rush hour and periods of foreseeably heavy traffic, or weather conditions. Programming therefore centres on creating rules with which to measure efficiency in a specific context, and on developing parameters for action. By following these rules, “intelligent” automated systems choose the most efficient course of action. This level of abstraction is a milestone in the history of technology.
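To make the contrast concrete, here is a minimal sketch in Python; the road network, distances, times and delay factors are all invented for illustration. The conventional rule always minimises a single fixed quantity, distance, whereas the more abstract rule minimises a cost function that prices in the current conditions of the journey.

```python
import heapq

# A toy road network: node -> [(next_node, distance_km, base_minutes)].
# All places, distances and times here are hypothetical.
GRAPH = {
    "Seville": [("Cordoba", 140, 80), ("Merida", 190, 115)],
    "Cordoba": [("Madrid", 350, 210)],
    "Merida":  [("Madrid", 340, 200)],
    "Madrid":  [],
}

def best_route(start, goal, cost_fn):
    """Dijkstra's algorithm: 'best' is whatever cost_fn says it is."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return round(cost), path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist_km, base_min in GRAPH[node]:
            heapq.heappush(
                queue, (cost + cost_fn(nxt, dist_km, base_min), nxt, path + [nxt])
            )

# Conventional automation: one exact, fixed rule (minimise distance).
print(best_route("Seville", "Madrid", lambda nxt, km, mins: km))

# "Intelligent" automation: a context-dependent measure of efficiency.
# delay stands in for live data on roadworks, rush hour, weather...
delay = {"Cordoba": 1.6, "Merida": 1.0, "Madrid": 1.2}  # hypothetical factors
print(best_route("Seville", "Madrid", lambda nxt, km, mins: mins * delay[nxt]))
```

The search algorithm is identical in both cases; what changes is the rule by which efficiency is measured, which is exactly where the programming effort shifts with this kind of automation.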

These achievements are impressive and unsettling at the same time. A lack of familiarity with AI makes it seem like magic and leads us to reopen old debates. Is this technology intelligent? Does it have feelings or free will? Is it capable of evil and premeditation? Who is responsible when the system has unexpected and undesirable effects? Will it change the nature of human beings? What are the risks? Do we need new rules?

These are precisely the questions that were debated in the courts of many countries following the commercialisation of the automobile at the beginning of the 20th century. That the same doubts surrounding the introduction of a new form of transport are being resurrected a century later, with the arrival of AI, demands that we revisit those old debates. From a regulatory point of view, three points deserve our attention.

1. The technology only seems to be intelligent and human because it is not in common use

The commercialisation of cars was, in its day, an event excitedly heralded by all social classes. The automobile promised an efficient and hygienic future for cities whose streets were fouled with horse dung. Within a few years the question had turned on its head, and cars had become the new urban plague. Demonstrations against the threat to safety were common in the 1920s: real wrecks from automobile accidents were paraded through the streets, complete with bloodied mannequins and Satan at the wheel. In Washington and New York, demonstrations were organised with over 10,000 children dressed as ghosts, symbolising the annual death toll from traffic accidents.

Within a few years, the car had become the object of a fundamental ethical debate that went all the way to the courts. The Georgia Court of Appeals (USA), for example, debated at length the moral character of the automobile. In its verdict, the court concluded that although vehicles were not exactly malevolent, “they should be classified as dangerous wild animals”. Thus, the same rules were to be applied as for the ownership of exotic animals.

Obviously, over time, contact and familiarity with the new vehicles weakened the theories that humanised these machines and attributed diabolical motives and intent to them. The legal and ethical debate came to centre on the behaviour of the human beings on the street and behind the wheel.

This aspect of the discussion, which at first glance seems philosophical in nature, had clear legal consequences. The idea of attributing responsibility to the machine, as though it were an intelligent being, was discarded. In retrospect, not only would the contrary seem ridiculous; it would have posed a real challenge for ethics and law to devise rules and punishments viable for both human beings and machines.

The debate around AI raises the same issues and requires us to consider the same ethical and legal consequences. Does a robot have intentions that would justify making it a legal entity in its own right? To what extent can responsibility fall to the machine, exonerating the human beings involved? How could sanctions be applied to a machine?


Artificial Intelligence and its methods of statistical analysis have no intentions, let alone free will. Artificial Intelligence is not intelligent. It is incapable of having ambitions or self-interest, or of deceiving or lying. In other words, artificial intelligence should scare us no more than statistics does. This does not mean it is harmless. Artificial intelligence and its algorithms are not neutral: they reflect the intentions and the involuntary biases of the teams of programmers and data scientists, and of the bodies involved in implementing the technology.

Highly transparent protocols can be established for AI that make it possible to trace any modification made by a person, however complex the algorithms that operate the technology may be. There is therefore no reason at all to create a specific legal personality for artificial intelligence. The technology itself allows responsibility for any failing or abuse to be assigned to a specific person; indeed, it makes doing so clearer and easier than before.

Both the driver who operates the artificial intelligence and the pedestrian exposed to the traffic risk can be identified
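One way to picture such transparent protocols, offered purely as an illustrative sketch rather than a scheme the text prescribes, is an append-only, hash-chained change log: every modification to a system is recorded together with its author, so a failing can later be traced back to a specific person, however complex the underlying algorithms.

```python
import hashlib
import json
import time

def digest(record: dict) -> str:
    """SHA-256 over the record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditLog:
    """Append-only log: each entry commits to the previous one,
    so silently rewriting history breaks the hash chain."""

    def __init__(self):
        self.entries = []

    def record(self, author: str, action: str, detail: str) -> None:
        entry = {
            "author": author, "action": action, "detail": detail,
            "time": time.time(),
            "prev": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        entry["hash"] = digest(entry)  # hashed before the "hash" key exists
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or digest(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical authors and actions, for illustration only.
log = AuditLog()
log.record("data_scientist_A", "retrain", "added Q3 data")
log.record("engineer_B", "threshold_change", "cutoff 0.7 -> 0.6")
print(log.verify())  # True; altering any past entry would make this False
```

However the real system is built, the point survives the sketch: responsibility attaches to the named people making changes, not to the algorithm itself.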

2. Ethics and law should be technologically neutral

Going back to the dilemmas generated by the arrival of cars a century ago: at the time, it was crucial to focus the debate, both ethical and legal, on human beings in order to formulate practicable and applicable laws. Nevertheless, a legal and regulatory system for adjudicating rights and obligations could only be legitimate if it rested on a clear understanding of the risks and of the actors involved. It took the courts and society a long time to understand both the technical aspects of the car and the problems posed by traffic.

The first regulatory attempts seem grotesque to us today, not least because they imposed obligations on actors incapable of exercising adequate control over the machine. In the United Kingdom, for example, a driver was required to notify the sheriff before driving through any municipality; armed with red flags, the sheriff would then walk in front of the car to warn pedestrians.

The legal system that first attempted to regulate traffic attributed responsibility exclusively to the driver. In those days, however, the streets were notable for their unpredictability: traffic signs had not yet been invented, children played in the road, horse-drawn carriages bolted at the sound of motors, and pedestrians could not judge the speed at which cars were approaching. All this made assigning full responsibility to the driver disproportionate: physiologically, it was impossible to react to so many unforeseen events.

Pragmatism and a sense of social justice led the Canadian James Couzens to invent a system of signs and rules of the road to coordinate pedestrians and drivers. Couzens resigned from his post as vice president of finance at Ford and went to work for Detroit City Hall (USA), the global automobile capital of the time. Cigar in hand, he revolutionised transport infrastructure. First, he identified the situations in which responsibility fell to the pedestrian, and he created zones and signs for crossing the street.

Initial resistance from society was fierce, and the rules and obligations imposed on pedestrians were not free of controversy: Councillor Sherman Littlefield called them degrading, since they “treated citizens like livestock”. Couzens would not be intimidated, however, and imposed his rules by decree. Time proved him right: the effectiveness of his proposal was demonstrated, and it ultimately became the international model. Couzens was also responsible for a traffic control and management plan that could do without a constant police presence when manpower was short. That is how Detroit became the birthplace of other revolutionary ideas, such as the automatic traffic lights that emerged in the 1920s.

It is notable how little attention Couzens paid to the automobile as a technology in its own right when conceiving his traffic regulations. His rules and limitations were concerned not with technical aspects but with the use to which the technology was put in public space. For example, measures to restrict speed did not prohibit the development of engines with more horsepower; they limited the driver’s use of the accelerator. Thanks to this, the laws and rules Couzens established did not have to be modified every time the technology developed, since each new development could always be placed in the context of its established social use. The fact that the traffic rules were technologically neutral is why, a century later, they are still in force and, in essence, no less relevant today.

In the field of AI, experts are studying laws and ethical principles that can be applied to programming code. One example is the “minimisation of personal data”, by which the amount of personal data processed in order to offer a service or carry out a task should be kept to a minimum. This technical principle is vitally important and affects information processing. On the one hand, it safeguards the privacy of the persons concerned. On the other hand, the rule may, paradoxically, have implications for equality of treatment, because it does not take context into account. For example, until a little over a decade ago, studies of beta blockers (a pharmaceutical commonly used in cardiology) were carried out on a database made up, for the most part, of white European males. The conclusions of the studies were valid for this group, but they proved not to be valid for women or for other ethnic groups with different genetic variations. The lack of information about specific social groups produced a database that was biased from the start: the profile and characteristics of one part of the population were overrepresented, distorting the calculation and giving an inaccurate impression of the whole. The assumption that using less data reduces the risk of discrimination is a myth. Depending on the context, more or less personal data will be required to avoid unintentional simplifications that lead us to discriminate against certain groups.
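A tiny simulation makes the overrepresentation problem visible; the group labels and effect sizes are invented for illustration. When one group dominates the database, the pooled result tracks that group and quietly misstates the result for everyone else, which is exactly the beta-blocker pattern described above.

```python
import random

random.seed(0)

# Hypothetical true responses to a drug, in arbitrary units.
TRUE_EFFECT = {"group_A": 10.0, "group_B": 4.0}

# A biased study population: group_A is heavily overrepresented.
sample = ["group_A"] * 950 + ["group_B"] * 50
responses = [random.gauss(TRUE_EFFECT[g], 1.0) for g in sample]

# The pooled estimate looks precise, but it is essentially group_A's value.
pooled = sum(responses) / len(responses)
print(f"pooled estimate: {pooled:.1f}")  # ~9.7, versus group_B's true 4.0

# Disaggregating reveals what the pooled figure hides.
by_group = {}
for g, r in zip(sample, responses):
    by_group.setdefault(g, []).append(r)
for g, vals in by_group.items():
    print(g, f"{sum(vals) / len(vals):.1f}")
```

Note that the cure here is not less data but the right data: adding more observations of group_B, which means more personal data rather than less, is what removes the distortion.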


These examples show that we need a change of strategy because, until now, the debate about artificial intelligence has centred on technical aspects. History demonstrates, however, that it is possible to create laws and regulations around new technologies without regulating the mathematical code itself. Ethics and law conventionally focus on the social context: their principles are applied not to the technical process but to the social situation into which that process is integrated. It should not be a question of regulating artificial intelligence as a technology, but of controlling what people do with it to society and to others.

3. Educating society to deal with new technologies does not require technical knowledge

With cars, Couzens saw that it was necessary to educate the citizenry if traffic rules were to permeate and be adopted by society. He was aware that some extremely important skills, such as learning to judge the distance and speed of an automobile, could only be acquired by integrating the technology into everyday life and letting people become familiar with its use. Couzens did not believe it was necessary for people to understand the mechanics of the automobile beyond operational functions such as braking, accelerating or changing a wheel. Both law and ethics are guided by the premise ultra posse nemo obligatur: no one is obligated beyond what he or she is able to do. The knowledge needed to understand the mechanics of a car goes beyond common sense, so nobody should be obliged to acquire it.

In the debate around AI, a much greater technical competence is expected of the citizen. However, the dilemmas the automobile generated at the start of the twentieth century show that this type of discourse is not constructive. We do not need to know how an aeroplane works to board one. We are not required to understand biochemistry before buying a yoghurt. No one is obliged to know more than they are able to, which is to say, more than common sense allows.

AI enables us to surface patterns of human behaviour and to identify differences in behaviour between groups (women, ethnic groups and social classes, among many others). On that basis, the team using the technology may decide to discriminate, more or less legitimately, by offering different services or information, steering people’s attention, or making different suggestions to different groups. Involuntary and implicit discrimination also needs to be taken into account, and that requires constant assessment of the technology. Experts should not only have a general ethical sensitivity; they should be particularly alert to involuntary discrimination arising from bias in the design or in the databases on which the AI operates.
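As a sketch of what such constant assessment could look like in practice (the data and tolerance here are invented, and real audits use richer metrics), one simple check compares a system’s positive-decision rates across groups and flags any gap beyond a threshold:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, decided_positive) pairs -> rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_alert(rates, tolerance=0.8):
    """Flag groups whose rate falls below `tolerance` times the best rate,
    a simplified version of the 'four-fifths' screening rule."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < tolerance * best]

# Invented audit data: (group, True = service offered).
decisions = (
    [("group_A", True)] * 80 + [("group_A", False)] * 20
    + [("group_B", True)] * 50 + [("group_B", False)] * 50
)

rates = selection_rates(decisions)
print(rates)                   # {'group_A': 0.8, 'group_B': 0.5}
print(disparity_alert(rates))  # ['group_B'] -> review for implicit bias
```

A flagged gap is not proof of wrongdoing, but it is the kind of routine signal that lets the operating team, rather than the citizen, carry the burden of vigilance.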

Whether this technology amplifies or compensates for discrimination depends on the group of human beings who operate it. It is not the citizen who must understand the technical processes behind AI in order to use it. It is the engineers, the data scientists, the marketing departments and the governments who use these technologies, or who are responsible for regulating them, who must understand the social and ethical dimensions of Artificial Intelligence.

****

Lorena Jaume-Palasí is founder of The Ethical Tech Society and a member of the Spanish Government’s Expert Panel (el Grupo de Sabios) on Artificial Intelligence and Big Data.
