
A Digital Future Requires Strong Human Rights Frameworks

Image Credit: Business & Human Rights Resource Centre

 

By Kendra Brock

 

“There is nothing more dangerous [when building a smart city] … than a stupid mayor and an eager company putting strange stupidities into the heart of the city.” - Antoni Vives, Former Deputy Mayor of Barcelona.

 

Nearly seven years after Vives made this remark at a global conference on smart cities, it remains as incisive as ever. At a recent event on smart cities hosted by the Center for Strategic and International Studies, Abha Joshi-Ghani, a Senior Advisor on Infrastructure, Public-Private Partnerships and Guarantees at the World Bank, drew on the quote to argue that economically and socially viable smart cities require not just technology, but good governance systems and a thoughtful identification of desired outcomes.

 

ICT projects, and particularly smart systems (such as smart cities), differ from other infrastructure projects in that they can both collect and respond to data on users. As ICT systems shape access to and usage of the digital public square and AI systems are delegated responsibility for decision-making tasks, they have the potential to fundamentally shape the balance of power between citizens and governments, and to both marginalize and empower vulnerable populations.

 

At the same time, increased investment in ICT infrastructure is direly needed, especially in Asia: this year’s telecom infrastructure financing gap in Asia is $16 billion, according to the G20’s Global Infrastructure Outlook. The Asian Infrastructure Investment Bank (AIIB), the world’s second largest multilateral bank (after the World Bank), is poised to play a critical role. In recognition of the opportunities for economic growth and improved environmental sustainability offered by the rapidly growing digital industry, as well as of widening digital divides, the AIIB released a Draft Digital Infrastructure Sector Strategy earlier this month for public comment. However, while the draft strategy identifies inclusion as a primary objective, the indicators and regulatory frameworks it relies on are likely to leave vulnerable populations at risk and to jeopardize human rights more broadly. To achieve its objective of inclusion, the AIIB must integrate human rights protections into its indicators and advance regulatory frameworks grounded in human rights.

 

Take, for example, the smart grid, an application of digital infrastructure that allocates electricity in response to usage patterns, reducing energy waste by making it more likely that power will be sent where it is needed. Under the draft strategy, the success of a smart grid would be measured by increased access to digital services and increased efficiency, defined as “cost reductions, value creation, resilience, and longevity.” But should smart grids optimize for access and efficiency if doing so jeopardizes a steady supply of electricity to vulnerable facilities such as hospitals? The delegation of decision-making to AI within smart systems makes it critical that societies (and the AIIB) carefully weigh the tradeoffs between efficiency and equity before implementation, and measure outcomes accordingly.
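To make that tradeoff concrete, consider a minimal sketch in Python (purely illustrative, with invented numbers and consumer names, and not drawn from the AIIB draft strategy or any real grid software). It contrasts a dispatch rule that optimizes a simple efficiency score alone with one that serves critical facilities such as hospitals first, showing how the choice of success indicator changes who loses power during a shortfall.

```python
# Illustrative sketch only -- not from the AIIB draft strategy.
# It contrasts a dispatch rule that maximizes "efficiency" alone with one
# that first guarantees supply to critical facilities (e.g. hospitals).
# All figures are invented for the example.

def dispatch(demands, supply, protect_critical):
    """Allocate a limited supply (kW) across consumers.

    demands: list of (name, demand_kw, is_critical, value_per_kw)
    supply: total available kW
    protect_critical: if True, serve critical facilities before optimizing value
    """
    # Order consumers: critical facilities first (if protected), then by
    # value per kW, which stands in for the strategy's efficiency indicator.
    order = sorted(
        demands,
        key=lambda d: ((not d[2]) if protect_critical else False, -d[3]),
    )
    allocation, remaining = {}, supply
    for name, demand_kw, _, _ in order:
        served = min(demand_kw, remaining)
        allocation[name] = served
        remaining -= served
    return allocation

consumers = [
    ("hospital", 40, True, 1.0),      # critical, but "low value" on paper
    ("data_center", 60, False, 3.0),  # high value per kW
    ("households", 50, False, 1.5),
]

# With only 100 kW available, the efficiency-only rule leaves the hospital
# with nothing; the equity-aware rule serves it in full.
print("efficiency only:", dispatch(consumers, supply=100, protect_critical=False))
print("hospitals first:", dispatch(consumers, supply=100, protect_critical=True))
```

The point is not the toy algorithm but the indicator: both rules look identical if success is measured only as aggregate access and efficiency, which is why the metrics themselves must encode equity.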

 

Smart grids, and other digital infrastructure, also have the potential to be used as tools for government surveillance. Beyond data on the level and timing of energy usage, smart grid cables can also carry high-speed telecommunications networks, further expanding smart grids’ already broad data collection capacities. The draft strategy classes such risks to privacy as regulatory risks and states that the AIIB will “[ensure] compliance of its investments with country-level regulations and laws. AIIB does not have a role in policy changes.” However, risks to privacy are often inadequately addressed by regulation, and may even be exacerbated by it, as when cybersecurity laws grant sweeping government access to citizens’ data.

 

Domestic regulations in many of the AIIB’s regional member states are unlikely to prioritize privacy protection, at least from the government itself: half of AIIB regional member states ranked by Freedom House’s Freedom on the Net project are categorized as ‘not free’, with only two states ranked as ‘free’ in its 2019 report. In order to ensure that ICT systems receiving AIIB financing do not exacerbate human rights abuses, the AIIB should instead recognize the potential for human rights violations as an independent risk category, with indicators grounded in international human rights standards.

 

The draft strategy’s definition of indicators leaves religious and ethnic minorities particularly vulnerable to technology-mediated human rights abuses. Although the strategy identifies “a positive impact on inclusion” as one of its three guiding principles, it operationalizes this objective by measuring the number of people with access to the internet and digital services, disaggregated by gender, rural/urban residence, income level, and disability. While these are important populations to target for increased access, the failure to disaggregate data for ethnic minorities specifically means that barriers arising from the lack of digital services in appropriate languages are likely to be overlooked. Failure to measure digital access for religious or ethnic minorities may also conceal the intentional denial of digital access to these minorities by governments.

 

Nor is digital access an adequate proxy for the AIIB’s objective of inclusion: bridging the digital divide requires not only access, but also an examination of barriers to equitable participation. For example, not only are women less likely than men to have access to digital technologies, but those who do use them receive twice as many threats online as men. In Myanmar and Sri Lanka, increased internet access also came with the weaponization of social media against ethnic and religious minorities. Given the range of potential outcomes offered by ICT systems, it is critical that the AIIB integrate human rights throughout its framework and indicators, including by identifying indicators that will actually advance an inclusive society.

 

In recognition of the unique nature of ‘soft’ infrastructure systems (systems like smart grids, which involve both hard physical infrastructure and the technology that manages it), the AIIB does allow for consultation with financiers, leading digital firms, regulatory agencies, “and other stakeholders” in order to inform its best practices. To fully manage human rights risks and ensure inclusivity, the AIIB must ensure that it makes specific provision for consultation with civil society groups working on privacy and the intersection between tech and human rights.

 

The AIIB’s Environmental and Social Framework (pdf), which the draft strategy draws upon to “help ensure development outcome of Digital Infrastructure Investment” (sic), allows for “public consultation and disclosure of information on the environmental and social risks and impacts of Projects.” However, given the interconnected and often cumulative nature of ICT systems, the AIIB must also draw on public consultation to guide the overall usage of ICT infrastructure.

 

To create robust best practices, the AIIB must both work locally (by involving local civil society organizations) and act globally. The storage of data across borders and the cross-border nature of the underlying internet architecture (the cables and connection hubs that allow for global flows of data) mean that threats to privacy and cybersecurity are not contained within borders. The draft strategy does allow that, given uneven and quickly developing regulatory policies, “the Bank sees a role in promoting the adoption of internationally recognized regulatory standards [at] the country level.” But in the absence of strong international standards (as opposed to regional ones, such as the GDPR) on technologies like facial recognition, this approach amounts to little more than an abdication of responsibility for the impacts of AIIB-financed ICT systems on human rights. Instead, the AIIB should advance clear regulatory standards guided by human rights at all levels of its work, which will both ensure that ICT systems do not exacerbate violations of rights and ensure that digital access promotes equitable societies.

 

 

The Asian Infrastructure Investment Bank’s Draft Digital Strategy is open to public consultation until early March 2020. The strategy and the underlying sector analysis are available here.

 

Kendra Brock holds an M.A. in Diplomacy from Seton Hall University and is a former deputy editor-in-chief of the Journal of Diplomacy. She researches the impact of digital and infrastructure projects on societies along the Belt and Road. She tweets @kendraebrock.
