Introduction
Digital technologies can have a transformational impact on the health and lives of people,
particularly those who are structurally discriminated against in different ways, for
example by improving access to health information and data and by strengthening diagnostic services.
Technology such as Artificial Intelligence (AI) has great potential to help women,
girls and gender-diverse people take better control of their health, bridging gaps
in access to health-related information and education by providing timely, accurate,
personalised answers to health questions. This could be a game-changer, especially
in settings where certain health issues such as sexual and reproductive health and
rights (SRHR) are still considered taboo.
But it is now widely recognised that AI and digital technologies are neither inherently
empowering nor sexist; they reflect the values of their contexts and creators. For
example, the rapid adoption of digital technologies aimed at increasing the efficiency
of health care delivery has, in some contexts, increased inequities and inequalities
in health care resources.[1]
There is grave concern that the Internet is being weaponised to silence and target
women’s, girls’, and queer voices. For example, a 2021 study revealed that 73% of
women across the globe have experienced some form of online violence on Twitter,[2]
and a 2023 study revealed that 20% of respondents who identified as transgender or
gender-diverse and had experienced tech-facilitated violence also reported severe
impacts on their mental health, including on their desire to live.[3,4]
Evidence points towards distinct geographic, economic, and social gaps in the design
of and access to these technologies, including those related to gender, disability,
and race.[5,6] For example, male (or family/community) gatekeepers often control or
restrict access to devices and the internet.[7] Several studies report that these
restrictions are greater for younger women and girls.[8,9]
The Internet has the potential to make health-related information more accessible.
However, in the context of health services such as abortion, the level of misinformation
has been described as the “next infodemic”,[10] and questions have been raised about
the role of social media platforms in propagating it.[11]
Recent data from the United States reveals that telehealth interventions, including
“digital abortion clinics” that connect patients with health care providers, have
been effective in increasing access to abortion care in remote areas and in places
where it is criminalised.[12] However, a 2022 investigation revealed that Hey Jane –
a US-based online abortion pill provider – used tracking tools that passed along user
information to Meta, Google, and other companies.[13]
Restrictive policy regimes that impair access to health care for women and gender-diverse
people carry over into the digital space, increasing surveillance and inhibiting access
to services such as safe abortion.
This commentary examines the challenges and implications for SRHR and bodily autonomy
in a digital world and highlights the critical knowledge and policy gaps to be addressed
to ensure that the transformational potential of technology reaches those who need
it the most while safeguarding their human rights.
SRHR and bodily autonomy in a digital world
As restrictions and attacks on bodily autonomy intensify in many parts of the world,
the datafication of our lives is also accelerating. Our bodies no longer exist just
in the physical world – they also exist in the digital domain: for example, in the
data about how many steps we took today, about the locations we visited, the routes
we travelled, the information we searched for, the food we ordered, the financial
transactions we made, and the data we shared with software applications about our
physical health or mental health. AI can assemble these disparate pieces of data to
create profiles of each of us – where the digital puzzle pieces of our bodies and
lives are put together to form personal, sensitive, identifiable wholes.
The implications are deep and require careful consideration. For instance, physical
or mental health apps can share data about our bodies or minds with our health insurer,
or our existing or potential employer. Our internet searches for abortion pills, and
our chat-app conversations with family about our bodies and our personal choices,
could be handed over to the government by Google and Facebook to build a case for
prosecution where these services are criminalised. This is exactly what happened
recently in the US, where a woman and her daughter stood trial in the state of Nebraska
for performing an illegal abortion – with a key piece of evidence provided by Meta,
the parent company of Facebook.[14]
As abortion bans across America are implemented and enforced, TechCrunch (an online
newspaper) reports that law enforcement is turning to social media platforms to build
cases to prosecute women seeking abortions or abortion-inducing medication – and online
platforms like Google and Facebook are helping.[15]
This is not just happening in America, but rather is a growing trend, including in
low- and middle-income countries, where the internet is being used to target both
individual women and providers of safe abortions. For example, in Kenya, where abortion
is criminalised,[16] the internet is one of the main sources of SRHR-related information
for young people but also exposes them to surveillance and to mis- and disinformation.[17]
Healthcare providers are now beginning to navigate these complex ethical and legal
issues, but much remains to be done given the absence of a common global understanding,
definitions, and data protection regulations. A 2022 study revealed that 87% of the
23 most popular women’s health apps on the current international market (the App Store
on the Apple operating system [iOS] and Google Play on the Android system) shared
user data with third parties, but only half requested consent from their users.[18]
As Kelly and Habib point out in their Commentary in this series, the data and privacy
practices of fertility apps have increasingly been called into question in post-Roe
America.[19]
Some of the greatest risks that women, girls, and gender-diverse people face with
digital technology include online harassment, cyberbullying, cyberstalking, unsolicited
sexual messages or images, non-consensual sharing of intimate photos, child sexual
exploitation and abuse, as well as data security and privacy risks.[20]
According to a 2020 UNICEF report, women, girls, and people of diverse genders are
at greater risk of digital harm: 52% of young women globally have experienced some
form of digital harm, and 87% of them believe the problem is getting worse.[5] Social
media, in particular, is perceived as an unsafe space, with 68% of online abuse of
women and girls taking place on social media platforms.[21]
Violations of the human rights of women, girls, and LGBTI persons are widely reported;
however, online environments continue to operate as an accountability-free zone.
Evidence also points towards inherent bias against gender equality and SRHR in the
algorithmic models used by social media platforms, which is further compounded by
the refusal of big tech companies to address abuse of and attacks on gender and SRHR
advocates. In 2021, anti-gender-equality activists conducted a targeted campaign against
a well-known Somali women’s rights activist who uses her Facebook page to support
survivors of domestic violence and rape in the Somali diaspora. She was restricted
from posting by Facebook because her content had been repeatedly mis-flagged as
inappropriate,[22] demonstrating how “community standards” can be arbitrary: tech
companies can ultimately create and enforce their own norm systems with limited or
no accountability, or those systems can be misused by the state.
Unfortunately, this is not an isolated incident. There are many other varied and concerning
examples of how social media platforms can silence specific users. It now seems undeniable
that certain online spaces are fertile ground for harmful activity targeting gender
norms. A recent ODI report, “Hidden in plain sight”,[23] demonstrates how the
infrastructure of social media shapes gender norms, unpicking how technological design,
profit models, and organisational hierarchies all give way to patriarchal norms and,
in doing so, perpetuate sexist, heteronormative, and racist stereotypes. The absence
of meaningful due diligence and precautionary approaches has exposed wide accountability
deficits in the legal regimes meant to address these situations.
Furthermore, recent evidence points to a thriving digital economy based on gender
trolling and violations of human rights, disincentivising meaningful action on these
issues by tech companies.
In addition to gender bias, AI faces other diversity bias challenges, including race.
For example, a study of facial recognition algorithms by the Algorithmic Justice League
found that the datasets on which various algorithms were trained consisted of roughly
80% images of white people and 75% male faces. As a result, the algorithms detected
male faces with a high accuracy of 99%, whereas their ability to recognise Black women
was significantly lower, succeeding only 65% of the time.
This has two main implications: first, homogeneous datasets will lead to biased algorithms
– making AI-driven digital health interventions ineffective or actively harmful for
excluded communities. Second, the technology needed to drive these innovations is
in most cases designed by male coders – even though this is beginning to change in
the fem-tech industry. For there to be more diversity and representation in datasets,
there needs to be wider access, meaningful use, and more inclusive design.[24]
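To make concrete the kind of audit that surfaces such disparities, the minimal sketch below (in Python, using entirely hypothetical data and group labels) computes recognition accuracy separately for each demographic subgroup. Dedicated fairness toolkits perform far more rigorous versions of this check, but the core idea is simply to disaggregate performance metrics rather than report a single aggregate figure.

```python
# Minimal sketch of a per-subgroup accuracy audit (hypothetical data).
# Aggregate accuracy can hide large gaps between demographic groups,
# so performance is computed and reported for each group separately.
from collections import defaultdict

# Each record: (true_label, predicted_label, subgroup) - illustrative only.
predictions = [
    ("face", "face", "lighter-skinned male"),
    ("face", "face", "lighter-skinned male"),
    ("face", "no_face", "darker-skinned female"),
    ("face", "face", "darker-skinned female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, group in predictions:
    total[group] += 1
    if predicted == true_label:
        correct[group] += 1

for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"on {total[group]} samples")
```

An audit of this shape makes disparities visible before deployment; the harder work is assembling evaluation data that genuinely represents the populations a system will serve.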
Thus, as efforts are made to digitise health systems, an intersectional approach goes
to the heart of addressing the multitude of issues raised by AI. It is imperative
that ethical issues in digital health solutions are anticipated before they arise,
and that they are treated with the same commitment to review, oversight, and human
rights that is commonly applied in other areas of health. For example, at the design
stage there need to be validation studies that take into account the diversity of
the populations informing machine learning algorithms, and that advance the transparency
of the underlying datasets feeding the predictive models on which clinical decisions
are expected to be made. An important step in this direction is the informed involvement
of the served and impacted communities in the ideation and design processes.
Governance deficits and the business models of big tech companies
To understand the situation, one needs to look at the full spectrum of issues, which
includes the business models of big tech companies; how decisions are made with regard
to gender and digital technologies; and the algorithmic models and code that underpin
content on social media platforms.
The depth and breadth of the personal data being generated, stored, and used grow
exponentially, and existing approaches to data governance have not kept pace, resulting
in severe challenges to gender equality. It is evident that seemingly anonymised personal
data can easily be de-anonymised by AI, facilitating the tracking, monitoring, and
profiling of women, girls, and gender-diverse people, as well as the prediction of
their behaviours. Together with facial recognition technology, such AI systems can
be used to cast a wide net of surveillance. Critically, given that the law is silent
on many privacy issues, mere compliance with regulations, some of which may be
significantly outdated and misaligned with technological advances, no longer suffices.
In this context, while menstrual apps are often promoted and perceived as digital
tools that enhance women’s bodily autonomy, the aforementioned power asymmetries
highlight the need to further unpack how to truly ensure autonomy in the digital space.
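To make the de-anonymisation risk concrete, the sketch below (in Python; the records, field names, and data sources are all hypothetical) illustrates a classic linkage attack: a health record stripped of names is re-identified by joining it with a public dataset on quasi-identifiers such as postcode, birth date, and sex.

```python
# Minimal sketch of a linkage (re-identification) attack on "anonymised"
# data -- all records here are invented. Names were removed from the health
# data, but the combination of postcode, birth date, and sex still singles
# people out once a second, identified dataset is available to join against.

anonymised_health_records = [
    {"postcode": "00100", "birth_date": "1995-03-12", "sex": "F",
     "sensitive": "searched for abortion pills"},
    {"postcode": "00200", "birth_date": "1988-07-01", "sex": "M",
     "sensitive": "searched for knee pain"},
]

public_records = [  # e.g. a voter roll or a leaked customer list
    {"name": "Jane Doe", "postcode": "00100",
     "birth_date": "1995-03-12", "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_date", "sex")

def link(health_rows, public_rows):
    """Re-attach identities by matching rows on the quasi-identifiers."""
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"]
             for r in public_rows}
    for row in health_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], row["sensitive"]

for name, sensitive in link(anonymised_health_records, public_records):
    print(f"{name}: {sensitive}")  # -> Jane Doe: searched for abortion pills
```

Real attacks of this kind scale the same join across millions of rows, and AI-assisted matching relaxes the requirement that quasi-identifiers match exactly, which is why removing names alone offers little protection.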
The gender costs of business models based on data extraction, the concentration of
power, and – fundamentally – privacy and data protection legislation that is not keeping
up with digital transformation require urgent action. Big tech companies often operate
as what can best be described as neo-colonialist enterprises in an accountability-free
zone.
Clearly, data governance needs to be reimagined. The Lancet and Financial Times Commission
on Governing Health Futures argues that “The governance of digital technologies in
health and health care must be driven by public purpose, not private profit”. The
Commission has called for “a new approach to the collection and use of health data
based on the concept of data solidarity, with the aim of simultaneously protecting
individual rights, promoting the public good potential of such data, and building
a culture of data justice and equity”.[25]
The present safeguards are woefully inadequate. The language of data protection itself
needs to be questioned. Data is often likened to oil, and hence the majority of existing
frameworks focus on the protection of data as a resource. However, in a world where
physical and digital identities are inextricably linked, an embodied approach to data
governance, and a holistic understanding of bodily autonomy and integrity, dignity,
and freedom that spans the digital and physical spaces, are critical to fundamental
improvement. For example, consent-based frameworks must be viewed
through a prism of relational autonomy rather than individual autonomy, in that conditions
must be created for informed data ownership at the individual level before consent
can be given for data use.
Conclusion
If the potential of digital technology for safeguarding and enhancing sexual and reproductive
health and rights is to be realised, we must ensure the widest possible public debate
about alternative approaches to data governance. Given the ubiquity of technology-facilitated
gender-based violence and the existing power asymmetries in the design, deployment,
and regulation of new technologies, including the gender biases in big data sets and
the algorithms that use them, we need to place the rights of women, girls, and
gender-diverse people at the centre of these debates by ensuring their meaningful
participation and voice in decision-making related to digital technologies.