Technocrats in the EU are bent on total surveillance and controlling people, but a high-level, independent advisory group has warned against using AI for mass surveillance and social credit scoring. ⁃ TN Editor
An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.
This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.
The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.
The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens along the lines of China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit not to engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.)
“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”
The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).
Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.
As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next decade or more of cutting-edge tech development in the region, the wider attention and scrutiny drawn to digital practices and business models looks set to drive a clean-up of problematic practices that have so far been able to proliferate under little or no regulation.
The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.
“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.
“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”
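The federated machine learning the group cites as a privacy-preserving technique can be illustrated with a minimal sketch. Everything below (the linear model, the synthetic client data, the learning rate and round counts) is a hypothetical illustration rather than anything specified in the report; the point it demonstrates is that each participant trains on its own data locally and only model weights, never the raw personal data, are shared for averaging.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's gradient steps on its private data (the data never leaves the client)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """Server aggregates only the returned weights, weighted by each client's data size."""
    sizes = np.array([len(y) for _, y in clients])
    updates = [local_update(weights, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Hypothetical demo: two clients each hold private samples of y = 2x
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0]
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):  # 20 federated rounds
    w = federated_average(w, clients)
```

The server here ends up with a model close to the true coefficient without ever seeing any client’s individual records, which is the property that makes the technique relevant to GDPR-style data minimisation.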
Other policy suggestions among the many included in the HLEG’s report are that AI systems which interact with humans should be required to identify themselves as such, which would mean no sneaky Google Duplex-style human-speech-mimicking bots. A bot would instead have to introduce itself up front, giving the human caller a chance to disengage.
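As a toy sketch of what such mandatory self-identification could look like in practice (the class, wording and opt-out keyword below are entirely hypothetical, not from the report): the agent discloses that it is a machine before doing anything else, and honours a request to end the automated interaction.

```python
class DisclosingBot:
    """Hypothetical conversational agent that self-identifies before responding."""

    DISCLOSURE = "This call is handled by an automated AI assistant."

    def __init__(self):
        self.disclosed = False

    def respond(self, user_utterance: str) -> str:
        # The very first reply is always the disclosure, regardless of input.
        if not self.disclosed:
            self.disclosed = True
            return self.DISCLOSURE + " Say 'human' at any time to end the call."
        # Give the caller a standing chance to disengage.
        if "human" in user_utterance.lower():
            return "Understood, ending the automated call. Goodbye."
        return f"(bot reply to: {user_utterance!r})"

bot = DisclosingBot()
first = bot.respond("Hello?")
second = bot.respond("human please")
```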
The HLEG also recommends establishing a “European Strategy for Better and Safer AI for Children”. Concern and queasiness about rampant datafication of children, including via commercial tracking of their use of online services, has been raised in multiple EU member states.
“The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation,” the group writes. “Children should be ensured a free and unmonitored space of development and upon moving into adulthood should be provided with a “clean slate” of any public or private storage of data related to them. Equally, children’s formal education should be free from commercial and other interests.”
Member states and the Commission should also devise ways to continuously “analyse, measure and score the societal impact of AI”, suggests the HLEG — to keep tabs on positive and negative impacts so that policies can be adapted to take account of shifting effects.
“A variety of indices can be considered to measure and score AI’s societal impact, such as the UN Sustainable Development Goals and the Social Scoreboard Indicators of the European Social Pillar. The EU statistical programme of Eurostat, as well as other relevant EU Agencies, should be included in this mechanism to ensure that the information generated is trusted, of high and verifiable quality, sustainable and continuously available,” it suggests. “AI-based solutions can help the monitoring and measuring its societal impact.”
The report is also heavy on pushing for the Commission to bolster investment in AI — calling particularly for more help for startups and SMEs to access funding and advice, including via the InvestEU program.
Another suggestion is the creation of an EU-wide network of AI business incubators to connect academia and industry. “This could be coupled with the creation of EU-wide Open Innovation Labs, which could be built further on the structure of the Digital Innovation Hub network,” it continues.
There are also calls to encourage public sector uptake of AI, such as by fostering digitalisation by transforming public data into a digital format; providing data literacy education to government agencies; creating European large annotated public non-personal databases for “high quality AI”; and funding and facilitating the development of AI tools that can assist in detecting biases and undue prejudice in governmental decision-making.
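One simple check a bias-detection tool of the kind the report envisages might run is the demographic parity gap: the difference in positive-decision rates between groups. The metric is a standard fairness measure; the group labels and decision data below are hypothetical illustrative values, not anything drawn from the report.

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across the groups present.

    decisions: iterable of 1 (benefit granted) / 0 (denied)
    groups: parallel iterable of group labels
    """
    rates = {}
    for g in set(groups):
        members = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical decisions for two placeholder groups "A" and "B"
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)  # 0.8 vs 0.2 approval rate
```

A gap of zero means both groups receive positive decisions at the same rate; a large gap, as in this toy data, is the kind of signal that would flag a governmental decision process for closer review.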