The AI Act and a (sorely missing!) right to AI individualisation: Why are we building Skynet?

Published in the European Law Blog, 16.07.2024

The industry has tricked us; scientists and regulators have failed us. AI is developing not individually (as humans develop, each into an individual) but collectively: a huge collective hive to collect, store and process all of humanity’s information; a single entity (or a few, with interoperability among them as open an issue today as their operation itself) to process all our questions, wishes and knowledge. The AI Act that has just been released ratifies, for the moment at least, this approach: the EU’s ambitious attempt to regulate AI treats it as if it were simply a phenomenon in need of better organisation, without granting any rights (or participation, and thus a voice) to individuals. This is not only a missed opportunity but also a potentially risky approach; while we may not be building Skynet as such, we are accepting an industry-imposed shortcut that will ultimately hurt individual rights, if not individual development per se.

This mode of AI development is a result of short-termism: an immediate need to get results quickly and to make a ‘fast buck’. Unlimited (and unregulated, save for the GDPR) access to whatever information is available for processing obviously speeds things up – and keeps costs down. Data-hungry AI models learn faster through access to as-large-as-possible repositories of information; the improvements are then fed into next-generation AI models that are even more data-hungry than their predecessors. The cycle can be virtuous or vicious, depending on how you see it.

In the iconic 1984 film The Terminator, humans fought against Skynet, “an artificial neural network-based conscious group mind and artificial general superintelligence system”. Skynet was a single, collective intelligence (“group mind”) that quickly learned everything that humans knew and controlled all of the machines. Machines (including Terminators) did not develop independently, but as units within a hive, answering to and controlled by a single, omnipresent and omnipotent entity – Skynet.

Isn’t this exactly what we are doing today? Are we not happy to let Siri, Alexa, ChatGPT (or whatever other AI entity the industry and scientists launch) process, as a single entity, a single counterparty with which each one of us interacts, all of our information through our daily queries and interactions with them? Are we not also happy to let them control, using that same information, all of our smart devices at home or at the workplace? Are we not, voluntarily, building Skynet?

But I do not want to be talking to (everybody’s) Siri!

All our AI end-user software (and other automated software assistants) is designed and operates as a single, global entity. I may be interacting with Siri on my iPhone (or Google Assistant, Alexa, Cortana etc.), asking it to carry out various tasks for me, but so do millions of other people on the planet. In essence, Siri is a single entity interacting simultaneously with each one of us. It is learning from us and with us. Crucially, however, the improvement from the learning process goes to the one, global Siri. In other words, each one of us is assisted individually through our interaction with Siri, but Siri develops and improves as a single, global entity.

The same is true today of any other AI-powered or AI-aspiring entity. ChatGPT answers any question or request that pops into one’s mind, yet this interaction assists each one of us individually while developing ChatGPT itself globally, as a single entity. Google Maps drives us (more or less) safely home, but at the same time it catalogues how all of us move around the planet. Amazon offers us suggestions on books or items we may like to buy, and Spotify on music we may like to listen to, but at the same time their algorithms learn what humans need and how they appreciate art.

Basically, if one wanted to trace this development back, they would come across the moment that software transformed from a product into a service. In the beginning, before the prevalence of the internet, software was a product: one bought it off the shelf, installed it on one’s computer and used it (subject to the occasional update) without having anything further to do with the manufacturer. However, when each and every computer and computing device on the planet became interconnected, the software industry, under the pretence of automated updates and improved user experience, found an excellent way to increase its revenue: software became not a product but a service, payable in monthly instalments that apparently will never stop. Accordingly, in order to (lawfully) remain a service, software needed to stay constantly connected to its manufacturer/provider, feeding it at all times with details of our use and other preferences.

No user was ever asked about the ‘software-as-a-service’ transformation (governments, particularly in tax havens, happily obliged, offering tax residencies for such services in exchange for competitive taxation). Similarly, no user has been asked today whether they want to interact with (everybody’s) Siri. One AI entity to interact with all of humanity is a fundamentally flawed assumption. Humans act individually, each at their own initiative, not as units within a hive. The tools they invent to assist them, they use individually. It is of course true that each person’s self-improvement, added up within our respective societies, leads to overall progress; still, humanity’s progress is achieved individually, independently and in unknown and frequently surprising directions.

On the contrary, scientists and the industry are today offering us a single tool (or, in any case, very few, with interoperability among them still an open issue) to be used by each one of us in a manner recordable and processable by that tool, not by us. This is unprecedented in humanity’s history. The only entity so far to interact, in its singularity, with each one of us separately, and to be assumed omnipresent and omnipotent, is God.

The AI Act: A half-baked GDPR mimesis phenomenon

The biggest shortcoming of the recently published AI Act, and of the EU’s approach to AI overall, is that it deals with AI only as a technology in need of better organisation. The EU tries to map and catalogue AI, and then to apply a risk-based approach to reduce its negative effects (while, hopefully, still allowing it to develop lawfully, in regulatory sandboxes and the like). To this end the EU employs organisational and technical measures to deal with AI, complete with a bureaucratic mechanism to monitor and apply them in practice.

The similarity of this approach to the GDPR’s, a GDPR-mimesis phenomenon, has already been identified. The problem is that, even under this overly protective and unimaginative approach, the AI Act is only a half-baked example of GDPR mimesis. This is because the AI Act fails to follow the GDPR’s fundamental policy option of including the users (the data subjects) within its scope. On the contrary, the AI Act leaves users out.

The GDPR’s policy option of including the users may appear self-evident now, in 2024, but it is anything but. Back in the 1970s, when the first data protection laws were being drafted in Europe, the pendulum could have swung in any direction: legislators might well have chosen to treat personal data processing, too, as a technology merely in need of better organisation. They could well have chosen to introduce only high-level principles on how controllers should process personal data. Importantly, however, they did not. They found a way to include individuals, to grant them rights, to empower them. They did not leave personal data processing to organisations and bureaucrats alone to manage.

This is something the AI Act is sorely missing. Even combined with the AI Liability Directive, it still leaves users out of the AI scene. This is a huge omission: users need to be able to participate, to actively use and take advantage of AI, and to be afforded the means to protect themselves from it, if needed.

In urgent need: A (people’s) right to AI individualisation

It is this need for users to participate in the AI scene that a right to AI individualisation would serve. A right to AI individualisation would allow users to use AI in the way each one sees fit, deliberately, unmonitored and unobserved by the AI manufacturer. The link with the provider, which today is always on and feeds all of our innermost thoughts, wishes and ideas back to a collective hive, needs to be broken. In other words, we need only the technology, the algorithm alone, to train and use ourselves without anybody’s interference. This is not simply a matter of individualising the experience at the UX end but, fundamentally, at the backend. The ‘connection with the server’ that has been forced upon us through the software-as-a-service transformation needs to be severed, and control of one’s own, personalised AI given back to the user. In other words, we need to be afforded the right to move from (everybody’s) Siri to each one’s Maria, Tom, or R2-D2.
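To make the architectural point concrete, here is a minimal sketch, assuming an open-weights language model already stored on the user’s own disk and loaded with the Hugging Face transformers library; the model directory name is hypothetical. It illustrates what a severed ‘connection with the server’ could look like in practice, with the model loaded and queried entirely on-device, so that no query ever reaches a provider; it is an illustration of the idea, not a description of how today’s assistants actually work.

```python
# A minimal sketch of AI individualisation: an open-weights model that runs
# entirely on the user's machine. Nothing typed here is reported back to any
# provider; the "always-on link" to a collective hive simply does not exist.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./my-own-model"  # hypothetical folder of locally stored weights

# local_files_only=True makes any attempt to contact a remote hub fail fast,
# so the model can only ever be loaded from the user's own disk.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Remind me what I planned for tomorrow morning."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whatever such a personal model subsequently learns from its user would stay with the user; that locality is precisely what the proposed right would guarantee.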

Arguably, the right to data protection serves this need already, granting us control over the processing of our personal data by third parties. However, the right to data protection comes with well-known nuances: various legal bases, for example, that permit the processing anyway, or technical-feasibility limitations on the rights afforded to individuals. After all, it is under this existing regulatory model, which remains in effect, that today’s model of AI development was allowed to take shape. A specific, explicitly spelled-out right to AI individualisation would address exactly that, closing the existing loopholes the industry has been able to exploit while placing users at the centre.

A host of other considerations would follow the introduction of such a right. Principles such as data portability (art. 20 of the GDPR), interoperability (art. 6 of EU Directive 2009/24/EC) or even the right to be forgotten (art. 17 of the GDPR) would have to be revisited. Basically, our whole perspective would be overturned: users would be transformed from passive recipients into active co-creators, and AI itself from a single-entity monolith into a billion individualised versions, one for each user it serves.

As such, a right to AI individualisation would need to be embedded in systems’ design, similarly to privacy by-design and by-default requirements. This is a trend increasingly noticeable in contemporary law-making: as digital technologies permeate our lives, legislators find that it is sometimes not enough to regulate the end result, meaning human behaviour; they must also regulate the tools or methods that lead to it, meaning software. Soon, software development and software systems’ architecture will have to pay close attention to (if not be dictated by) a large array of legal requirements found in personal data protection, cybersecurity, online platforms and other fields of law. In essence, it would appear that, contrary to the older belief that code is law, at the end of the day (it is) law (that) makes code.

European rules for Artificial Intelligence,

Published in Οικονομική Επιθεώρηση (Economic Review), 23.06.2023

A few days ago, the European Parliament settled on its final position on the draft Artificial Intelligence Act. (I consider the official Greek rendering of the title, «Πράξη» (‘Act’) rather than «Νόμος» (‘Law’) for Artificial Intelligence, a failed choice.) The draft had been prepared a few years earlier by the European Commission. As the relevant procedure provides, the Commission’s proposal is worked on separately by the Parliament and the Council. The Council had already settled its own position a few months earlier, so now begins the so-called ‘trilogue’ between the three institutions, that is, the internal negotiations held behind closed doors so that they can arrive at a final text.

What happened somewhat differently this time is that the trilogue began the very day of the Parliament’s decision. This is indicative of the time pressure in view of the 2024 European elections: at the start of the new year legislative work will wind down, and, after the elections, who knows whether the new Parliament and Commission will have the same priorities, and views, as today’s. Hence the suffocating pressure to wrap everything up soon.

In any event, once it is all finished, Europe will be the first in the world to have a law regulating Artificial Intelligence. This development was neither hasty nor unexpected, despite the time pressure of the current period: the European Commission appointed in 2019 made the ‘digital transformation’ a priority and legislated accordingly, with consistency and continuity. Work on Artificial Intelligence had begun even earlier, as far back as 2017. In other words, the European Artificial Intelligence Act is not a product of the moment, or of the interest generated, for example, by ChatGPT.

What was a product of the moment, however, were the legislative interventions in the Commission’s original text by both the Council and the Parliament. In essence, each of them ‘enriched’ the provisions drafted by the 2019 Commission with the technologies of the day, that is, of 2023. Thus their proposals specifically address the technologies behind ChatGPT, as well as other Artificial Intelligence technologies that have made their presence felt recently.

What is the problem with this? That the generality of the law is lost, and the legislator appears to be running, breathlessly, after whatever the IT industry releases next. This stance, although it solves immediate problems, neither leads to long-term solutions (consider that the previous Directive on personal data dated from 1995 and the one on electronic commerce from 2000) nor, ultimately, creates legal certainty.

Irrespective of the above, since Europe is legislating first, both the US and China will, as a matter of fact, be obliged to follow its solutions. This has happened in other areas of digital technology regulation too, for example with the large online platforms or, earlier, with the GDPR. This creates the so-called ‘Brussels effect’, whereby everyone else, in essence, copies Europe’s laws.

However, there is an ‘elephant in the room’ whenever Europe speaks of regulating Artificial Intelligence. This open secret is the fact that the relevant technologies are produced not in Europe but in the US and China. In essence, Europe merely uses them; anyone not convinced need only consider the ‘nationality’ of the Artificial Intelligence applications they know (as with the online platforms, for that matter, though that is another discussion).

This, then, explains the comma in the title of this text. Its full title would be ‘European rules for Artificial Intelligence, which, however, is not produced in Europe’. In essence, Europe is legislating for the technologies of others, the US and China. This is yet another consequence of globalisation. Whether it will turn out to be for the better or for the worse for Europe, Europeans and the technology itself remains to be seen.