Two of the law-engineering solutions for Football 4.0 are plain for all to see. First, the regulatory framework must be adapted, made more specific, and multidisciplinary in nature. Second, European policymaking and jurisprudence on personal data protection should be grounded primarily in data's usage and impact, and only secondarily in its source.
The AI-based "data ball" championship of a new era in elite football
Over the last two decades, Electronic Performance and Tracking Systems (accelerometers, gyroscopes, RFID devices, CCTV, among others) have collected large amounts of predictor data about data subjects. Nowadays, massive datasets indexed to individual players and fed into the input layers of Artificial Intelligence (AI) models are being tested as Decision Support Systems (DSS) to assess footballers' future prominence or consistency on the field, thereby assisting coaches and general managers in making short- and medium-term evaluations. AI assessments may soon become the driving force behind better match-result prediction, athlete performance evaluation, injury prevention and prediction, and even more accurate estimation of players' value on the transfer market. For instance, Benfica's data-driven operation, which explores AI with the ambition of building a competitive football powerhouse, uses insights gleaned from sensor-laden pitches to develop personalized training programmes, improving the team's competitiveness accordingly.
Fascinating as they are, these computational tasks are neither intuitive nor straightforward problems to solve. On the contrary, they have become some of the most difficult challenges that domain experts cannot yet deal with adequately, let alone with acuity. Performance in football follows irregular courses depending on players' position, health status, and psychophysical condition. Each game period has an unsteady flow; play does not stop at each minute. It involves a non-sequential chain of events that are often inaccurately collected, thus compromising the quality of the output derived from them.
The data protection jigsaw of the new AI game-changer in professional football
In the current state of the art of football analytics, tactical or transfer choices are not decisions based solely on automated processing. The outcome (as opposed to the output) of options weighed by coaches or general managers still rests on human judgement, meaning the prohibition enshrined in Article 22 of the GDPR does not, should not, and must not in any way be applied to the current state of Football 4.0, especially from the psychological viewpoint. DSS in AI-based football analytics does not yet even technically constitute a decision based solely on automated processing. Data scientists cannot precisely determine whether this sporting category should be treated as a single, whole task, or whether the specific prediction and optimization problems are but one part of the overall, multiple match dynamics. Nor can AI experts determine which personal, team, or environmental metrics can characterize complex and non-deterministic settings such as those mentioned above. Consequently, assessing reliability and accuracy as performance measures, and the system's resulting trustworthiness, are both goals marked by great uncertainty about the correctness of the datasets. The people involved in the human-machine control loop must not become complacent, over-reliant, or unduly diffident when faced with the outputs of a system that is not fully credible. Today, though, AI-based assessments still only help coaches and others perceive, intuitively, players' weaker or stronger physical or mental condition. There are no robot coaches or machine managers yet capable of completely replacing human beings. AI in the sports business still demands human commitment, individual judgement, and further choice.
A fairer, more privacy-respecting sporting discipline like never before
As should now be clear, the present-day multidisciplinary nature of the law engineering of AI also encompasses, here, a close intersection of labour law, sports law, and data protection law. Subjecting footballers to a whole set of duties entails fine-tuning legal tools to the rights and freedoms concerning personal data processing within the football employment relationship. For instance, in a preventive spirit, both legal experts and legislative bodies have emphasized the dangers AI poses to the right to the protection of personal data, enshrined in Article 8(1) of the EU Charter of Fundamental Rights and Article 16(1) of the Treaty on the Functioning of the EU. Indeed, autonomous estimations (players' profiling, as defined in Article 4(4) of the GDPR) in Football 4.0 often draw privacy-invasive, non-intuitive, and unverifiable inferences concerning footballers. Nonetheless, not every semi-automated self-learning model or mining method necessarily creates new opportunities for unfair, discriminatory (or, more precisely, biased) automated processing, particularly under Article 5(1)(a) of the GDPR. For the time being, two criticisms can be made of the inadequacy of the scheme in force:
- Firstly, the widespread distinction between types of personal data based on identifiability and sensitivity makes no sense when applied to AI semi-automated decision-making. This legal imbroglio, or loop, was the easiest path policymakers could have taken, though hardly the worthiest. Here, too, inferred data – i.e., the automated output – may help clubs and their workforces perform better or fulfil their respective duties. The outcome is not machine-made, and not every way of correlating personal data in AI-based on-field monitoring leads to taste-based discrimination. Put simply: applying inferred knowledge, whether in-game or in training sessions, does not breach the principle of non-discrimination enshrined in Article 1(2) of Convention No. 111 of the International Labour Organisation, nor the internal rules of Member States established, among others, under Article 88(1) of the GDPR. That is, any distinction, exclusion, or preference regarding professional sporting practice – inherent to the specific duties, terms, and conditions laid down for this sport-related professional activity – shall not be hastily considered algorithmic discrimination. On the contrary: using AI as a means of distributive justice in elite football makes it possible to allocate burdens according to individual skills, to rate a player on the transfer market according to an estimate of their value, and to provide salaries that vary proportionally with the quality of the performances exhibited. The only potentially deviant semi-automated processing concerns the use of age as an input attribute, which is nevertheless conceivable given the influence that the deterioration of athletic skills beyond 35 years of age has on performance. Hence, age-related capability is the main reason why employment agreements in football are subject to rapid wear.
Contract clauses are usually agreed on short terms because of the decline in athletes' competitiveness, which clubs cannot, or need not, afford. And they do so legitimately, by the very nature of this sporting discipline.
- Secondly, despite all the European legislator's attempts to harmonize theoretical safeguards, professional footballers are not yet conscious of the risks involved or of how to exercise their rights with respect to AI-based techniques. As a consequence, consent cannot be validly given. Moreover, since there is no strict necessity to use AI for players to maintain their psychophysical condition or monitor their health status, and since the legitimate interest in predictive analysis is neither present nor real, law engineers of, or in, modern tech law find no lawful basis for the processing.
In matters of Football 4.0, the protection of personal data is neither prius nor posterius; it is neither the given nor the solution; it stands neither at the beginning nor at the end. It simply does not exist, though it specifically should. The legal framework in force leaves professional football practitioners alone, at the mercy of unworkable protection, particularly given the digital illiteracy that their modus operandi reveals. To be clear: the technological sources involved in AI decision-making are not reliable and are sometimes statistically discriminatory, yet they remain preferable to any equally biased, erratic, or ill-intentioned human decision-maker. European lawmakers on personal data protection should base regulatory policies primarily on data's usage and impact, and only secondarily on its source.
Published under licence CC BY-NC-ND.
** This article is based on the paper:
Analide, C.; Morgado Rebelo, D. (2020). "A New AI Power Play for Winners in the Employment Relationship of Professional Football: How Lawful is Artificial Intelligence to the Upcoming Portuguese 'Big Data Ball' Championship?", in Anuário E.Tec 2020 – Artificial Intelligence & Robots, coord. Maria Miguel Carvalho, JusGov, Universidade do Minho, pp. 171-202. Available at https://www.jusgov.uminho.pt/pt-pt/publicacoes/anuario-etec-2020-2/ (accessed on January 18, 2021).