For decades, women have been vastly under-represented in medical research and trials. That’s starting to change — Health Canada’s 1997 mandate that women be included in clinical trials was a helpful start — but many of the protocols for diagnosing and treating patients are still based on what works for men. That means women are taking drugs dosed for the average male body weight and experiencing unanticipated side effects because of biological differences. In fact, women are 50 to 75 per cent more likely than men to experience an adverse reaction to medication.
Their symptoms are also less likely to be taken seriously. For instance, while heart disease is the leading cause of premature death in women in Canada, early heart attack signs were missed by doctors in 78 per cent of women, according to the Heart and Stroke Foundation. And issues facing women specifically receive far less attention than those affecting only men — five times more money goes to erectile dysfunction research than studies on premenstrual syndrome.
In recent years, researchers have been looking to artificial intelligence (AI) to improve health care. Experiments with the technology have shown promise in smoothing patient flows, helping to diagnose skin concerns, decreasing human hours spent on administrative details, prioritizing patient needs and facilitating drug discoveries, among other things.
So could AI help close the knowledge gap and improve health care for women and other under-researched groups? It’s an exciting prospect and entirely conceivable, says Ashley Casovan, the executive director of the Responsible AI Institute in Ottawa. The non-profit organization is working to advance the responsible adoption of AI systems through the development of a certification program. “There are a ton of potential opportunities,” she adds — but only if there is rigorous oversight and inclusivity at every step of the way.
AI is only as good as the data it has access to. Isn’t there a chance these algorithms could just perpetuate flawed assumptions and misunderstandings about women’s health?
There are a lot of concerns about AI systems being used in health care — that these tools will be built on historical biases and on the systemic issues that already exist in health care, where women don’t have the same degree of access and there isn’t the same amount of research into female-specific conditions.
Those biases then bleed into how an AI is trained. If a system isn’t built in a responsible manner, it can perpetuate those systemic, human-made problems in health care.
We’ve seen this, for example, with Google’s initial pass at an AI tool for diagnosing and treating skin conditions. (Black people made up just 6.8 per cent of the sample data set.) It’s important to make sure that these systems are equitable for all.
Are you an optimist when it comes to this technology? Is inclusive AI actually possible in health care?
There’s a way to make it more equitable by identifying the issues and then establishing how some of the risks and harms to specific groups can be rectified through design, development and operational processes.
Responsible AI is trying to establish standards around reviewing the systems before they’re deployed. That means looking at things like what data is used to train the model, what type of consent is required from the patients, whether a system is augmenting the doctor’s decision-making or is more of an end-to-end process. And when the process is being used, we need to ensure that people understand the implications and, if something goes wrong, there are recourse mechanisms to support them.
Recommendations have been made to make sure that the people who are impacted — women, different minority groups — are included in the design and development of those systems. The whole process needs to be more inclusive.
What is the potential for AI to level the playing field for women when it comes to health care?
It’s like how seatbelts are made for men, not for women. How do you design something that’s more nuanced? With AI, there’s a possibility to have more options that are tailored to different groups. One key challenge is when gender differences intersect with other things. If you have a woman who also has pre-existing conditions, is there the appropriate medicine for that?
It comes back to the collection of data. If women, children and people of colour are under-represented in that data, then interpretations of it become skewed toward the over-represented populations.
What systems and regulations need to be in place?
There’s recognition at the federal level that AI needs oversight. That was documented in Bill C-27 last year, which combined AI with data privacy and consumer protection — all of these components go hand in hand.
The data input is one factor; the context in which that system is deployed is another. Canada recognizes that a one-size-fits-all policy isn’t going to serve all contexts of how AI is used.
Health Canada is thinking through what those sectoral regulations are. And since health care is governed provincially, it’ll work with the provinces.
The inequities in health care are recognized issues. They’re not new. They’re not because of AI. But we’re at this inflection point where we can either use technology to help us or to perpetuate these harms. And I don’t want to see the latter. There needs to be regulations, oversight and standards in place so AI can be used in a way that benefits all versus just some.
Ashley Casovan will join a panel discussion on what it will take to transform leadership in health care at a special MaRS Morning, a networking breakfast and talk on March 8. Find out more here.
Torstar, the parent company of the Toronto Star, has partnered with MaRS to highlight innovations in Canadian companies.
Disclaimer: This content was produced as part of a partnership and therefore may not meet the standards of impartial or independent journalism.