Summary of the Meeting of the National Security Transparency Advisory Group (NS-TAG) - February 23, 2022
Held via Videoconference
NS-TAG Members Present:
- Lesley Soper (standing in for Dominic Rochon)
- Thomas Juneau
- Bessma Momani
- Mary Francoli
- Harpreet Jhinjar
- Justin Mohammed
- Jeffrey Roy
- Daniel Jean
- Jillian Stirk
- Khadija Cajee
- “Connecting with Diverse Communities: Enhancing how National Security Organizations Engage, Build Trust, and Evaluate Success” – Part Eight
Invited Guests and Speakers:
- Ashley Casovan – Executive Director, Responsible AI Institute
- Barbara Perry – Director, Centre on Hate, Bias and Extremism, Ontario Tech University
National Security Community Members Present (as observers):
- Canada Border Services Agency (CBSA)
- Communications Security Establishment (CSE)
- Financial Transactions and Reports Analysis Centre of Canada (FINTRAC)
- Global Affairs Canada (GAC)
- Public Safety Canada (PS)
- Royal Canadian Mounted Police (RCMP)
- Canadian Security Intelligence Service (CSIS)
- Opening of the Meeting and Roll Call
- Discussion Session with Guest Speakers: “Connecting with Diverse Communities: Enhancing How National Security Organizations Engage, Build Trust, and Evaluate Success” – Part Eight
- Discussion on the Group’s Third Report
The eighteenth virtual NS-TAG meeting took place on February 23, 2022, on the theme “Connecting with Diverse Communities: Enhancing How National Security Organizations Engage, Build Trust, and Evaluate Success” – Part Eight. Guest speakers discussed how AI can contribute to transparency, enhance public trust, and empower both people and businesses. They also examined differing perceptions of policing practices and emphasized the importance of continued proactive engagement and transparency within communities.
Key Takeaways of Guests’ Remarks and of the Discussion Session:
- Organizations across various industries that harnessed AI a few years ago are seeing significant benefits today. For example, machine learning platforms can analyze client data to determine loan eligibility or increase rates of employment through recommendation systems.
- AI is valuable in certain areas but can also have negative impacts. For instance, algorithms can embed gender and other biases in hiring practices, and recognition systems still struggle to correctly identify people with disabilities, people of color, and other groups.
- Overall public trust in AI is low. The public should be aware of AI use and have access to appropriate recourse mechanisms if problems arise.
- Responsible AI is the practice of utilizing AI in a way that empowers people and businesses. This can be done by leveraging the benefits of AI, decreasing its negative effects on people and the environment, reducing the economic and reputational impact of problematic application of AI tools, and promoting educational and transparency resources that will increase public trust in AI.
- When asked about what advice could be offered to reduce inherent bias within government institutions’ AI systems, one guest speaker suggested bias tests be used, as well as mechanisms that notify the user if an AI system is found to have bias.
- On the question of policing, data from the 2019 General Social Survey revealed that it is not just communities who feel police are not addressing crime effectively; law enforcement officials themselves feel their responses are not always adequate.
- Law enforcement has found that a strong trust deficit exists among certain social groups. Some marginalized communities do not feel they are taken seriously by law enforcement, or simply do not feel safe approaching them. In particular, Muslim communities experience bias, discrimination, and Islamophobia from law enforcement.
- At the same time, other communities feel quite differently; they feel that police are actively there for them and are present in ensuring their safety.
- For many people, the disparity in policing has been illustrated in the way police handled the Freedom Convoy versus the Black Lives Matter and Indigenous Lives Matter movements. The Critical Infrastructure Defence Act did not seem to apply to the Freedom Convoy occupiers, despite it being used to remove protests organized by Indigenous people.
- When asked about how social media influenced the reporting of hate crimes, one guest speaker replied that the use of social media likely exacerbates hate, but that there is not sufficient data on this.
- Asked whether it is difficult for governments to find the right balance between curation and censorship online, one guest speaker replied that the issue is multi-layered and that legislation must be part of the response. More effort is required to support civil society organizations that monitor hate speech or develop strategies to help youth and adults build the resilience to recognize hateful narratives and identify when they are being groomed and recruited. Prevention and building civil society capacity are high priorities.
- Highlighting the contributions made by marginalized communities to Canadian society helps to give a better understanding of communities, particularly since the media often focuses on negative narratives related to certain groups.
- There is a serious gap in critical digital literacy, and closing it is important to countering disinformation. An app that could flag possible misinformation could be explored.
- When asked what outreach tools work or do not work in terms of engagement efforts, one guest speaker suggested continuing to focus on transparency, including publishing annual reports (and responses to them) to ensure accountability. It is also important to approach these issues with a global perspective and to understand the context and history of the communities that national security organizations engage with.
- Additionally, a proactive presence and ongoing engagement were suggested: engaging with communities not only when there is a need to gather intelligence or when problems arise, but also sharing information with them and establishing a two-way dialogue with grassroots actors.