Conduct an AI Privacy Risk Assessment

Navigate AI privacy and data concerns with a comprehensive privacy impact assessment.

Striking a careful balance between innovation and regulation is a challenge for many organizations, often influenced by the following factors:

  • Uncertainty as to where data exists and what type of data exists within the organization.
  • Confusion around which data protection regulations apply and how they impact current data processing operations.
  • Lack of clarity as to what problems AI will solve for the business.

Our Advice

Critical Insight

Elevate your AI innovation by embedding privacy. As AI and privacy evolve, adapting privacy impact assessments (PIAs) is essential. Expanding the scope to include data governance for AI, incorporating ethical dimensions, fostering diverse stakeholder participation, and taking a continuous improvement approach to risk assessment are all crucial for responsible AI implementation.

Impact and Result

  • Understanding of data privacy best practices and how AI can support a privacy-centric environment.
  • Guidance on completion of impact assessments that validate the integration of AI technology.
  • Ability to leverage privacy as a competitive advantage.

Conduct an AI Privacy Risk Assessment Research & Tools

1. Conduct an AI Privacy Risk Assessment Storyboard – Learn how to carefully balance data privacy obligations with the adoption of AI technologies that drive efficiencies in the context of your business.

Assess the existing condition of data governance, privacy, and security within your organization as you prepare to develop or deploy AI technologies.

Delve into AI technologies by exploring responsible AI principles, along with the implementation of AI within the realm of data privacy.

2. AI Privacy Impact Assessment Tool – Leverage PIAs to empower your AI technology by integrating protection and management of personal data at scale.

PIAs enable you to assess privacy risks and identify mitigating actions throughout the lifecycle of AI models and systems in a structured way.
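
For illustration, here is a minimal sketch of the kind of structured record such an assessment can produce: a threshold check that routes a project toward a Lite or Full PIA, and a risk entry that pairs each identified risk with a mitigating action and an owner. The field names, threshold questions, and scoring below are assumptions made for this example, not the actual schema of the AI Privacy Impact Assessment Tool.

```python
# Illustrative sketch only -- field names, questions, and scoring are assumptions,
# not the schema used by Info-Tech's AI Privacy Impact Assessment Tool.
from dataclasses import dataclass


@dataclass
class PrivacyRisk:
    description: str      # e.g. "Prompt logs retain customer identifiers"
    lifecycle_stage: str  # e.g. "data collection", "training", "inference"
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe)
    mitigation: str       # proposed mitigating action
    owner: str            # accountable role

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact


def pia_scope(processes_personal_data: bool,
              uses_sensitive_categories: bool,
              automated_decisions_affect_individuals: bool) -> str:
    """Toy threshold analysis: decide whether a Lite or Full PIA is warranted."""
    if not processes_personal_data:
        return "Lite PIA"
    if uses_sensitive_categories or automated_decisions_affect_individuals:
        return "Full PIA"
    return "Lite PIA"


risk = PrivacyRisk("Prompt logs retain customer identifiers", "inference",
                   likelihood=4, impact=3,
                   mitigation="Mask identifiers before logging",
                   owner="Data steward")
print(pia_scope(True, False, True), "- top risk severity:", risk.severity)
```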

3. Sample AI PIA for Microsoft Copilot – Introduce Gen AI across Microsoft business applications.

As generative AI becomes more integrated into Microsoft business applications, interacting with AI will become business as usual for employees. Before your business implements Copilot features, it’s essential to understand the privacy impact.


Workshop: Conduct an AI Privacy Risk Assessment

Workshops offer an easy way to accelerate your project. If you are unable to do the project yourself, and a Guided Implementation isn't enough, we offer low-cost delivery of our project workshops. We take you through every phase of your project and ensure that you have a roadmap in place to complete your project successfully.

Module 1: Identify Privacy Drivers for Your Business

The Purpose

  • Identify the driving forces behind the privacy program.
  • Understand privacy governance.

Key Benefits Achieved

  • Privacy requirements documented

Activities

1.1 Understand personal data and the need for a privacy program.
1.2 Discuss legal, contractual, and regulatory obligations.
1.3 Understand privacy regulation and AI.
1.4 Discuss privacy and data protection by design.
1.5 Define and document program drivers.

Outputs

  • Business context and drivers behind privacy program

Module 2: Evaluate AI Through a Privacy Lens

The Purpose

  • Identify the driving forces behind the AI project.
  • Understand AI and data governance.

Key Benefits Achieved

  • AI and data requirements documented

Activities

2.1 Understand types of AI and their advantages.
2.2 Discuss industry applications of AI-powered technologies.
2.3 Define and document AI project drivers.
2.4 Understand the importance of data governance for AI.
2.5 Discuss privacy-enhancing techniques for AI.

Outputs

  • Business context and drivers behind AI project

Module 3: Assess the Impact of AI Implementation

The Purpose

  • Analyze risk thresholds for the project and technology.
  • Evaluate risk scenarios to formulate a remediation plan.

Key Benefits Achieved

  • Threshold analysis documented
  • Privacy impact assessment and report completed

Activities

3.1 Conduct threshold analysis for AI project.
3.2 Document details of AI governance framework, technical and business requirements, and testing methods.
3.3 Document details of data governance structure for the AI system.
3.4 Document privacy practices pertaining to the AI project.

Outputs

  • Completed threshold analysis determining need for a Lite or Full PIA

Module 4: Report the Impact of AI Implementation

The Purpose

  • Document risk mitigation actions.
  • Assign ownership and a timeline for completion.

Key Benefits Achieved

  • Privacy impact assessment report completed
  • Identified risks remediated and/or mitigated to an acceptable level

Activities

4.1 Document details of supply chain environments.
4.2 Document security practices pertaining to the AI project.
4.3 Identify potential risks and propose potential mitigation.
4.4 Prepare PIA report.
4.5 Debrief.

Outputs

  • Completed privacy impact assessment and report

Conduct an AI Privacy Risk Assessment

Navigate AI privacy and data concerns with a comprehensive privacy impact assessment.

EXECUTIVE BRIEF

Analyst Perspective

Effective data privacy strategy leads to well-designed AI implementation.

The age of Exponential IT (eIT) has begun, with AI leaving the realm of science fiction and becoming an indispensable tool for business operations. It offers immense benefits but introduces ethical responsibilities and stringent compliance requirements, particularly when handling personal data. Organizations may become vulnerable to hefty fines and reputational risks if these requirements are not met.

Trust is a cornerstone of successful business relationships. Aligning AI technology with a privacy strategy generates trust among customers and stakeholders. Privacy-conscious consumers actively seek out businesses that prioritize data protection, offering organizations a competitive edge. Building trust through data privacy will strengthen your organization's market position. It will also encourage responsible innovation and collaboration by enabling secure and ethical data sharing with your business partners.

Data quality is pivotal for AI system performance. Aligning AI objectives with privacy requirements will enhance your data validation and quality checks, resulting in more effective AI models. Additionally, a proactive approach to data privacy will position your organization to be adaptable as regulations and consumer expectations evolve.

Prioritizing data privacy compliance emphasizes an organization's commitment to responsible data practices and risk management. Organizations that integrate AI with a privacy strategy will be better equipped for long-term success in a data-centric world while upholding individual privacy rights.

Safayat Moahamad
Research Director, Security Practice
Info-Tech Research Group

Executive Summary

Your Challenge

Rapid advancements in technology and the increased prevalence of AI force IT and business leaders into a state of constant evolution.

Simultaneously, data privacy regulations have become increasingly stringent in an attempt to safeguard personal information from manipulation.

AI relies on the analysis of large quantities of data and often involves personal data within the data set, posing an ethical and operational dilemma when considered alongside data privacy law.

Common Obstacles

Striking a careful balance between innovation and regulation is a challenge for many organizations, often influenced by:

  • Uncertainty as to where data exists and what type of data exists within the organization.
  • Confusion around which data protection regulations apply and how they impact current data-processing operations.
  • Lack of clarity as to what problem(s) AI will solve for the business.

Info-Tech’s Approach

Design an AI implementation that is guided by data governance and data privacy best practices.

  • Know the external (regulatory) environment.
  • Know the internal (organization) data environment.
  • Outline the potential AI use cases.
  • Assess your organization’s current privacy posture.

Effective AI implementation is built on a foundation of effective data privacy principles and awareness.

Elevate your AI innovation by embedding privacy. As AI and privacy evolve, adapting PIAs is essential. Expanding the scope to include data governance for AI, incorporating ethical dimensions, fostering diverse stakeholder participation, and taking a continuous improvement approach to risk assessment are all crucial for responsible AI implementation.

Your Challenge

This research is designed to help organizations that are facing these challenges and need to:

  • Develop a set of relevant use cases for AI implementation based on the industry and nature of the organization’s business.
  • Eliminate inefficiencies by streamlining lower-skill tasks through the use of AI.
  • Retain the trust of the workforce and consumers through ethical AI implementation.
  • Create or revise the current data governance structure within the context of the business.
  • Align data privacy practices of the organization with the scope of the external regulatory environment.
  • Ensure that data privacy becomes a standard preplanning process involved in all technology implementation projects.

  • 65% of consumers have lost trust in organizations over their AI practices.
  • 92% of organizations say they need to be doing more to reassure customers about how their data is being used in AI.
  • 96% of organizations agreed they have an ethical obligation to treat data properly.

Source: Cisco, 2023

“As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed.”

– Cameron Kerry, MIT Scholar, in Brookings, 2020

Data privacy: An enabler of AI

Data privacy measures enhance the efficacy and integrity of AI systems.

Data privacy and protection regulations, such as the EU's GDPR, govern many of the key data principles that AI is also subject to, including:

  • Profiling
  • Automating
  • Minimizing
  • Defined Period of Retention
  • Transparency
  • Right to Explanation
  • Purpose or Intent
  • Consent

While these concepts may appear contradictory when applied to AI-powered technologies, they are fundamental in ensuring the effective deployment of AI systems. Without data privacy best practices and principles of data governance, AI is like a ship without a compass.
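
As a rough illustration of how these principles can be turned into working-level checks during an AI privacy review, the sketch below pairs each principle with an example question a review team might ask. The questions are generic examples assumed for this sketch, not requirements quoted from the GDPR or any other regulation.

```python
# Illustrative only: an assumed mapping from the privacy principles above to
# example review questions. These are generic examples, not text from the GDPR
# or any other regulation.
PRINCIPLE_CHECKS = {
    "Profiling": "Is any automated profiling of individuals documented and disclosed?",
    "Automating": "Can a human review or override automated decisions that affect individuals?",
    "Minimizing": "Is training and inference data limited to what the stated purpose requires?",
    "Defined Period of Retention": "Are training data, prompts, and outputs deleted on a defined schedule?",
    "Transparency": "Are data subjects told that an AI system processes their data, and how?",
    "Right to Explanation": "Can the organization explain, in plain language, how an output was produced?",
    "Purpose or Intent": "Is the AI use case consistent with the purpose for which the data was collected?",
    "Consent": "Was valid consent (or another lawful basis) documented for this processing?",
}


def open_items(confirmed: set[str]) -> list[str]:
    """Return the principles whose checks have not yet been confirmed."""
    return [principle for principle in PRINCIPLE_CHECKS if principle not in confirmed]


# Example: only consent and minimization have been confirmed so far.
print(open_items({"Consent", "Minimizing"}))
```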

  • 50% of organizations are building responsible AI governance on top of existing, mature privacy programs.
  • 60% of organizations stated AI impact assessments are conducted in parallel to privacy assessments.
  • 49% of organizations combine their algorithmic impact assessments with their existing process for privacy or data protection impact assessments.

Source: IAPP, 2023

Integrating responsible AI: Approach of business leaders

The rapid proliferation of AI is met with trepidation as business leaders carefully examine the challenges associated with implementation.

  • 55% of business leaders state that they are taking steps to protect AI systems from cyberthreats and manipulations.
  • 41% of business leaders are conducting reviews to be sure that third-party AI services meet standards.
  • 52% of business leaders state that they are taking steps to ensure AI-driven decisions are interpretable and easily explainable.

Source: "2022 AI Business Survey," PwC, 2022

Data strategy drives AI readiness.

57% of business leaders say they are taking steps to confirm their AI technology is compliant with applicable regulations.

Data governance is a key strategy for effectively managing data and keeping information protected.

Info-Tech Insight

Know your data and governance environment before you act. Scope the potential data that will be impacted and ensure appropriate controls are in place.

Responsible AI guiding principles

Without guiding principles, outcomes of AI use can be negative for individuals and organizations.

Data Privacy

AI systems must respect and safeguard individuals' privacy by addressing potential privacy impacts and implementing protective measures for sensitive data.

Explainability and Transparency

Individuals impacted by the AI system’s outputs must be able to comprehend the logic and why similar situations may yield different outcomes. Organizations have a duty to make AI system development, training, and operation understandable.

Fairness and Bias Detection

AI must align with human-centric values, which encompass core principles such as freedom, equality, fairness, adherence to laws, social justice, consumer rights, and fair commercial practices.

Accountability

Organizations and developers are responsible for ensuring that the AI systems they create, manage, or use function as expected, in line with their roles and relevant regulations, and for demonstrating this through their actions and decision making.

Validity and Reliability

AI systems must function effectively under various use conditions and contexts; this requires assessing potential failure scenarios and their consequences.

Security and Safety

AI systems must not create undue safety risks, including risks to physical security, throughout their lifecycle. Regulations on consumer and privacy protection define what constitutes unreasonable safety risks.

Responsible AI: Data Privacy, Explainability and Transparency, Fairness and Bias Detection, Accountability, Validity and Reliability, and Security and Safety.

Source: Build Your Generative AI Roadmap

"On an operational level, there are several ways privacy processes are used for responsible AI. AI impact assessments are typically merged or coordinated with privacy impact assessments.”

– IAPP, 2023

Perform a PIA for Your AI Technology: Set your project up for success with a privacy impact assessment.

Microsoft case study: Responsible use of technology

INDUSTRY: Technology
SOURCE: World Economic Forum, 2023

In 2016, Microsoft grappled with challenges following the racism fiasco involving its chatbot Tay. The experience motivated the company to incorporate ethics into its product innovation process. Microsoft acknowledges the profound influence of technology, advocating for responsible technology development to benefit society.

The company established six core ethical principles for AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. To translate these principles into practice, Microsoft outlined steps to support the development of ethical AI systems. Sensitive cases require reporting. In 2019, Microsoft introduced a responsible AI training course, mandatory for all employees.

Now the company employs practical tools to facilitate ethical technology development, including impact assessments and community juries, among other methods.

This shift promoted innovation, encouraged ethical consideration of technology's impact on society, and recognized the urgency of addressing the issue.

Material impacts on Microsoft’s business processes include:

  • Judgment Call: A team activity where participants take on different roles to simulate product reviews from various stakeholder perspectives. This promotes empathy, encourages ethical discussions, and supplements direct stakeholder interactions in the product design process.
  • Envision AI: A workshop that uses real scenarios to instill a human-centric approach to AI and ethical considerations, empowering participants to understand and address the impacts of their products on stakeholders.
  • Impact Assessments: Impact assessments are compulsory for all AI projects in its development process. The completed assessments undergo peer and executive review, ensuring the responsible development and deployment of AI.
  • Community Jury: A method for project teams to engage with diverse stakeholders who share their perspectives and discuss the impacts of a product. A group of representatives serve as jury members, and a neutral moderator facilitates the discussion, allowing participants to jointly define opportunities and challenges.

Additionally, Microsoft utilizes software tools aimed at understanding, assessing, and mitigating the ethical risks associated with machine learning models.

Nvidia: A case for privacy enhancing technology in AI

INDUSTRY: Technology (Healthcare)
SOURCE: Nvidia, n.d.; eWeek, 2019

Nvidia, a leading player in the AI solution space, offers Clara Federated Learning, a long-awaited, privacy-centric approach to integrating AI within the healthcare industry.

The solution safeguards patient data privacy by ensuring that all data remains within the respective healthcare provider’s environment rather than being moved to external cloud storage. A federated learning server, connected to each site over a secure link, aggregates model updates rather than raw data. This framework enables a distributed model to learn from client data without exposing sensitive information and supports adherence to regulatory standards.
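
As a rough sketch of that general pattern (not Nvidia’s Clara Federated Learning code), the example below shows a central server that only ever receives model weights from each site, never the underlying records; the sites, model, and training loop are assumptions made for illustration.

```python
# Minimal federated averaging sketch (NumPy). It illustrates the general pattern
# described above -- raw records never leave each site; only model weights are
# exchanged -- and is not Nvidia Clara FL code.
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each site trains on its own data locally (simple linear model, squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights travel back over the secure link


def federated_round(global_w, sites):
    """The federated learning server averages site updates, weighted by data volume."""
    updates = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (40, 60, 80):  # three hospitals with different amounts of local data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(20):  # aggregation rounds
    w = federated_round(w, sites)
print("learned weights:", np.round(w, 2))  # approaches [2.0, -1.0]
```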

Clara runs on the NVIDIA EGX intelligent edge computing platform. It is currently in development with healthcare giants such as the American College of Radiology, UCLA Health, and Massachusetts General Hospital, as well as King’s College London, Owkin in the UK, and the National Health Service (NHS).

Nvidia provides solutions across its product offerings, including AI-augmented medical imaging, pathology, and radiology solutions.

Personal health information, data privacy, and AI

  • The global proliferation of data privacy regulations may be recent, but personal health information is most often governed by its own set of regulations. Some countries with national data protection regulations include health information and data within special categories of personal data.
    • HIPAA – Health Insurance Portability and Accountability Act (1996, United States)
    • PHIPA – Personal Health Information Protection Act (2004, Canada)
    • GDPR – General Data Protection Regulation (2018, European Union)
  • This does not prohibit the use of AI within the healthcare industry, but it calls for significant care in integrating specific technologies due to the highly sensitive nature of the data being processed.

Info-Tech’s methodology for AI and data protection readiness

Phase Steps

1. Identify Privacy Drivers for Your Business

  • Define your privacy drivers
  • Understand data privacy principles
  • Review Info-Tech’s privacy framework

2. Evaluate AI Through a Privacy Lens

  • Define your AI drivers
  • Understand AI and its applications
  • Evaluate in the context of data privacy

3. Assess Impact of Implementation and Controls

  • Review your data governance posture
  • Understand AI risk management
  • Consider privacy-enhancing technologies

Phase Outcomes

Phase 1:
  • Knowledge of privacy principles and frameworks
  • Documented list of privacy program drivers
  • Documented list of privacy objectives
  • Level-setting on understanding of privacy from core team

Phase 2:
  • Knowledge of the different types of AI
  • Documented list of AI drivers
  • Technology-specific use cases
  • Level-setting on understanding of AI in the context of the organization from core team

Phase 3:
  • Understanding of operational posture for data governance, security, and privacy
  • Assessment of the privacy implications of implementing AI technology

Insight Summary

Implement responsible AI

Elevate your AI innovation by embedding privacy. As AI and privacy evolve, adapting PIAs is essential. Expanding the scope to include data governance for AI, incorporating ethical dimensions, fostering diverse stakeholder participation, and taking a continuous improvement approach to risk assessment are all crucial for responsible AI implementation.

Assess the changing landscape

Learn from those who paved the way before you. Once you've determined your organization's privacy strategy, analyze various use cases specific to your industry. Assess how leaders in your sector have incorporated AI technology with privacy considerations, successfully or unsuccessfully. Draw from both sets of results and strategies to get your organization ready while eliminating unsuitable use cases.

Embrace a privacy-centric approach

Prioritize data privacy as an integral part of your organization's values, operations, and technologies in the AI-driven future. This approach is essential for responsible AI implementation. It will offer insight and awareness for aligning AI with your current processes, data landscape, regulatory requirements, and future goals. A privacy-centric approach will enable your technology to achieve compliance and trust.

Be precise

Narrow down the potential ways AI can improve existing operations in your environment in order to drive efficiencies.

Govern your data

Know your data and governance environment before you act. Scope the potential data that will be impacted and ensure appropriate controls are in place.

Blueprint benefits

IT Benefits

  • An updated understanding of the different types of AI and relevant industry-specific use cases.
  • Perspective from a privacy lens on mitigating data privacy risk through IT best practices.
  • Guidance on completion of impact assessments that validate the integration of AI technology within the organization’s environment.
  • Knowledge of core AI vendor solutions that maintain a privacy-first approach based on the integration of explainability.
  • Data privacy best practices and how AI technology can support a privacy-centric environment.

Business Benefits

  • Overview of the different types of AI and how they drive business efficiency in isolation or in combination.
  • Understanding of the scope of data privacy regulations within the context of the organization.
  • Comprehensive outlook around data privacy best practices that enable effective AI integration.
  • Ability to leverage privacy as a competitive advantage in streamlining how customer data flows through the organization.

Info-Tech offers various levels of support to best suit your needs

DIY Toolkit

"Our team has already made this critical project a priority, and we have the time and capability, but some guidance along the way would be helpful."

Guided Implementation

"Our team knows that we need to fix a process, but we need assistance to determine where to focus. Some check-ins along the way would help keep us on track."

Workshop

"We need to hit the ground running and get this project kicked off immediately. Our team has the ability to take this over once we get a framework and strategy in place."

Consulting

"Our team does not have the time or the knowledge to take this project on. We need assistance through the entirety of this project."

Diagnostics and consistent frameworks are used throughout all four options.

Guided Implementation

What does a typical GI on this topic look like?

Phase 1

  • Call #1: Scope requirements, objectives, and your specific challenges.
  • Call #2: Discuss AI project pipeline.

Phase 2

  • Call #3: Review organization’s privacy drivers.
  • Call #4: Review organization’s AI drivers.

Phase 3

  • Call #5: Assess current data governance approach and privacy posture.
  • Call #6: Review and make modifications to privacy impact assessment for AI project.

A Guided Implementation (GI) is a series of calls with an Info-Tech analyst to help implement our best practices in your organization.

A typical GI is four to six calls over the course of four to six months.

Workshop Overview

Contact your account representative for more information.
workshops@infotech.com 1-888-670-8889

Activities

Day 1: Identify Privacy Drivers for Your Business

1.1 Understand personal data and the need for a privacy program.
1.2 Discuss legal, contractual, and regulatory obligations.
1.3 Understand privacy regulation and AI.
1.4 Discuss privacy and data protection by design.
1.5 Define and document program drivers.

Day 2: Evaluate AI Through a Privacy Lens

2.1 Understand types of AI and their advantages.
2.2 Discuss industry applications of AI-powered technologies.
2.3 Define and document AI project drivers.
2.4 Understand the importance of data governance for AI.
2.5 Discuss privacy-enhancing techniques for AI.

Day 3: Assess the Impact of AI Implementation

3.1 Conduct threshold analysis for AI project.
3.2 Document details of AI governance framework, technical and business requirements, and testing methods.
3.3 Document details of data governance structure for the AI system.
3.4 Document privacy practices pertaining to the AI project.
3.5 Identify potential risks and propose potential mitigation.

Day 4: Report the Impact of AI Implementation

4.1 Document details of supply chain environments.
4.2 Document security practices pertaining to the AI project.
4.3 Identify potential risks and propose potential mitigation.
4.4 Prepare PIA report.
4.5 Debrief.

Day 5: Next Steps and Wrap-Up (Offsite)

5.1 Complete in-progress deliverables from previous four days.
5.2 Set up time to review workshop deliverables and discuss next steps.

Deliverables

  • Day 1: Business context and drivers behind privacy program
  • Day 2: Business context and drivers behind AI project
  • Day 3: Completed threshold analysis determining need for a Lite or Full PIA
  • Day 4: Completed privacy impact assessment and report

Measure the value of this blueprint

As AI technology continues to augment organizational capabilities and drive efficiency, business and IT leaders must look to integrate appropriate use cases in a responsible manner that accounts for data privacy and protection regulatory obligations.

A privacy impact assessment approach ensures organizations remain compliant and can implement AI technologies effectively in a way that fits their specific business environment.

Info-Tech’s data privacy and AI project steps

  1. Coordinate internal stakeholders to identify privacy and AI tech drivers.
  2. Evaluate use cases and review data governance structure and data privacy program strategy.
  3. Assess the privacy implications of implementing AI technology.

“Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.”

– Brookings, 2018

Info-Tech Project Value

Note: The duration of a privacy impact assessment (PIA) can vary depending on the complexity of the project and the data involved. It can take several weeks to a few months. For the purposes of this blueprint, projects are assumed to be of moderate complexity.

  • 12 weeks: Average duration of an initial PIA
  • 480 hours: Average dedicated hours of external consultant’s privacy assessment of an organization
  • $125: Average hourly rate of external consultant for a privacy assessment
  • 45 hours: PIA duration leveraging this blueprint
  • $54,375: Estimated cost savings from this blueprint
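
For reference, the savings estimate follows directly from the figures above, assuming the consultant’s hourly rate applies to the hours avoided:

```python
# Reproduces the savings estimate from the figures above.
consultant_hours = 480  # external consultant's privacy assessment
blueprint_hours = 45    # PIA duration leveraging this blueprint
hourly_rate = 125       # external consultant hourly rate ($)

savings = (consultant_hours - blueprint_hours) * hourly_rate
print(f"Estimated savings: ${savings:,}")  # $54,375
```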

Phase 1

Identify Privacy Drivers for Your Business

This phase will walk you through the following activities:

  • Define your data privacy drivers

This phase involves the following participants:

  • Privacy officer
  • Senior management team
  • IT team lead/director
  • PMO or PMO representative
  • Core privacy team
  • InfoSec representative
  • IT representative

1.1 Define your data privacy drivers

1 hour
  1. Bring together a large group of relevant stakeholders from the organization. This can include those from the following departments: Legal, HR, Privacy, and Finance, as well as those who handle personal data regularly (Marketing, IT, Sales, etc.).
  2. Using sticky notes, have each stakeholder write one driver for the privacy program per sticky note. Examples include:
    • Create clear lines about how the organization uses data and who owns data
    • Clear and published privacy policy (internal)
    • Revised and relevant privacy notice (external)
    • Clarity around the best way to leverage and handle confidential data
    • How to ensure vendor compliance
  3. Collect these and group together similar themes as they arise. Discuss with the group what is being put on the list and clarify any unusual or unclear drivers.
  4. Determine the priority of the drivers. While they are all undoubtedly important, it will be crucial to understand which are critical to the organization and need to be dealt with right away.
    • For most, any obligation relating to an external regulation will become top priority. Noncompliance can result in serious fines and reputational damage.
  5. Review the final priority of the drivers and confirm current status.

Input

  • Optional: Ask core team members to brainstorm a list of key privacy program drivers and objectives

Output

  • Documented list of privacy program drivers
  • Documented list of privacy objectives
  • Level-setting on understanding of privacy from core team

Materials

  • Whiteboard/Flip charts
  • Sticky Notes
  • Pen/Marker

Participants

  • Privacy officer
  • Senior management team
  • IT team lead/director
  • PMO or PMO representative
  • Core privacy team
  • InfoSec representative
  • IT representative


About Info-Tech

Info-Tech Research Group is the world’s fastest-growing information technology research and advisory company, proudly serving over 30,000 IT professionals.

We produce unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. We partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

What Is a Blueprint?

A blueprint is designed to be a roadmap, containing a methodology and the tools and templates you need to solve your IT problems.

Each blueprint can be accompanied by a Guided Implementation that provides you access to our world-class analysts to help you get through the project.

Need Extra Help?
Speak With An Analyst

Get the help you need in this 3-phase advisory process. You'll receive 6 touchpoints with our researchers, all included in your membership.

Guided Implementation 1: Identify privacy drivers for your business
  • Call 1: Scope requirements, objectives, and your specific challenges.
  • Call 2: Discuss AI project pipeline.

Guided Implementation 2: Evaluate AI through a privacy lens
  • Call 1: Review the organization's privacy drivers.
  • Call 2: Review the organization's AI drivers.

Guided Implementation 3: Assess the impact of AI implementation and controls
  • Call 1: Assess current data governance approach and privacy posture.
  • Call 2: Review and make modifications to privacy impact assessment for AI project.

Authors

Safayat Moahamad

Cassandra Cooper

Contributors

  • Constantine Karbaliotis, Counsel, Nnovation LLP
  • Carlos Chalico, Partner, EY Canada
  • Amalia Barthel, Lecturer and Advisor, University of Toronto