CSP Global Blog

AI Readiness – Have you Completed a Risk Assessment? (Part 1)

Getting your Bearings

Welcome to part one of six in our AI Readiness security blog series. In this first edition we will be discussing risk assessments.

In the digital cosmos, Artificial Intelligence (AI) is evolving with a strong gravitational force, driving innovation and efficiency across industries. Among the many advancements in AI, Generative AI jumps out for its ability to create content, from text and images to music and code. However, with all the buzz around these new developments it is important to first stop and get our bearings.

In Australia the statistics around cybercrime – especially the impact on small businesses – paint a very sobering picture. Here are some statistics from the 2023-24 reporting period:

  • Approximately 43% of cyberattacks target small to medium-sized businesses (SMBs).
  • In the 2022-23 financial year, there were nearly 94,000 reports of cybercrime submitted to ReportCyber, marking a 23% increase compared to the previous year. This equates to one report every six minutes [1].
  • The cost of cybercrime to businesses increased by 14% compared to the previous financial year. Small businesses experienced an average financial loss of $46,000, while cybercrimes cost medium businesses an average of $97,200, and large businesses an average of $71,600.
  • Approximately 200,000 home office and small businesses in Australia are vulnerable to cyber threats.

The Promise of Generative AI

“88% of organizations face challenges with data accuracy, integrity, and excess, which are critical for AI success.”

Generative AI has the potential to revolutionize how businesses operate. It can automate content creation, enhance customer interactions, and even assist in complex problem-solving and reasoning. The possibilities are vast, but so are the risks if cybersecurity and data security are not prioritized. This is why the first step in taking our bearings should be to assess our current maturity and consider the importance of cybersecurity across the following:

  1. Protecting Sensitive Data: Generative AI systems often require access to vast amounts of data to function effectively. This data can include sensitive information such as customer details, financial records, and proprietary business information. Ensuring this data is protected from cyber threats is crucial to maintaining trust and compliance with regulations.
  2. Preventing Data Breaches: Cyberattacks are becoming increasingly sophisticated, and AI systems can be prime targets due to the valuable data they process. A data breach can have severe consequences, including financial loss, reputational damage, and legal repercussions. Robust cybersecurity measures help prevent unauthorized access and mitigate the impact of potential breaches.
  3. Ensuring System Integrity: Cybersecurity is not just about protecting data; it’s also about ensuring the integrity of AI systems. Malicious actors can manipulate AI algorithms, leading to incorrect outputs and potentially harmful decisions. By securing AI systems, we can maintain their reliability and accuracy.
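To ground the first of these points, automated identification of sensitive data usually begins with pattern matching over document text before graduating to dedicated classifiers. The sketch below is purely illustrative; the patterns and labels are simplified assumptions of our own, not the detection rules of any particular product:

```python
import re

# Illustrative patterns only -- real sensitive-information classifiers use
# checksums, keyword proximity, and confidence scoring, not bare regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "au_tfn": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),  # Australian TFN shape
}

def classify(text: str) -> set[str]:
    """Return the set of sensitivity labels detected in a piece of text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

doc = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(classify(doc))  # labels found in the sample document
```

Production tooling (for example, Microsoft Purview sensitive information types) layers validation and scoring on top of raw patterns, which is why question 1 of the assessment below asks specifically about an automated classification and labeling system.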

Baseline

“While 80% of organizations believe their data is ready for AI, over half (52%) encounter significant issues with data quality and organization during implementation [2].”

We’ve developed a method to assess your maturity posture using a questionnaire aligned with the CIS framework. This tool evaluates and reports your maturity level based on three core metrics for both awareness and readiness:

  • Overall Cybersecurity Baseline
  • Human Operated Ransomware
  • Data Security and Risk from Company Insiders

There are a number of questions within this assessment that will provide deep insights into your AI readiness. Here are some examples:

  1. My organization has implemented an automated system to identify, classify and label data containing sensitive, confidential, or privacy information.
  2. My organization has implemented a solution to monitor and control sharing of organizational data from document repositories.
  3. My organization has implemented a system to monitor and install updates across all devices (endpoints, network, and IoT) and operating systems.
  4. My organization has deployed a host-based intrusion prevention solution (for example, an Endpoint Detection and Response (EDR) client or a host-based IPS agent).
  5. My organization has implemented an automated system that can detect and respond to common risky scenarios such as data leaks, data theft, and corporate policy violations.
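A questionnaire like this typically rolls yes/no answers up into a maturity band for each core metric. The sketch below shows the general shape of such scoring; the control names, weights, and thresholds are assumptions for illustration, since the actual questionnaire and bands are not published here:

```python
# Hypothetical mapping of controls (from the example questions above)
# to the three core metrics; the real assessment is more granular.
METRICS = {
    "cyber_baseline": ["patching", "edr"],
    "ransomware": ["edr", "patching"],
    "insider_risk": ["data_classification", "sharing_controls", "dlp"],
}

def maturity(answers: dict[str, bool]) -> dict[str, str]:
    """Map yes/no answers per control to a coarse maturity band per metric."""
    report = {}
    for metric, controls in METRICS.items():
        score = sum(answers.get(c, False) for c in controls) / len(controls)
        # Illustrative thresholds, not the assessment's actual bands.
        if score >= 0.8:
            report[metric] = "Mature"
        elif score >= 0.5:
            report[metric] = "Developing"
        else:
            report[metric] = "Initial"
    return report

answers = {
    "data_classification": True,   # Q1
    "sharing_controls": False,     # Q2
    "patching": True,              # Q3
    "edr": True,                   # Q4
    "dlp": False,                  # Q5
}
print(maturity(answers))
```

Even this toy version shows why the metrics are reported separately: an organization can have a strong overall baseline yet remain weak on insider risk.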

Based on these discoveries, we can then start to build out a roadmap and prioritize the steps that should be taken next.

Microsoft’s Commitment to AI Readiness

Microsoft is committed to helping businesses harness the power of Generative AI while prioritizing cybersecurity and data security. This approach includes:

  • Comprehensive Security Solutions: Microsoft offers a family of security solutions, including Microsoft Defender and Microsoft Sentinel, designed to protect data and systems from cyber threats. These tools leverage AI to detect and respond to threats in real time, ensuring continuous protection.
  • Data Protection Frameworks: Microsoft provides robust data protection frameworks that help businesses comply with regulations and maintain data privacy. These solutions include encryption, access controls, and data loss prevention measures.
  • Ethical AI Principles: Microsoft is dedicated to developing AI technologies that are ethical and transparent, adhering to principles that ensure AI systems are fair, reliable, and secure, fostering trust and accountability.


Preparing for the Future

As businesses prepare to implement Generative AI, it is essential to prioritize cybersecurity and data security. By doing so, you can unlock the full potential of AI while safeguarding data and systems.

With the breadth of capabilities across the Microsoft Security portfolio, our team can help you:

  • Discover potential risks associated with AI usage, such as sensitive data leaks and users accessing high-risk applications.
  • Protect the AI applications in use and the sensitive data being reasoned over or generated by them, including the prompts and responses.
  • Govern the use of AI by retaining and logging interactions, detecting any regulatory or organizational policy violations, and investigating incidents once they arise.
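To illustrate the governance point, retaining interactions and flagging policy violations can be reduced to a very small pattern. This sketch is our own simplification, using an assumed keyword list; real governance tooling operates at the platform level with far richer policies:

```python
import re
from datetime import datetime, timezone

# Assumed, simplistic policy: flag any prompt or response that mentions
# credentials. Real policies cover regulatory and organizational rules.
BLOCKED = re.compile(r"\b(?:password|api[_ ]?key)\b", re.IGNORECASE)

audit_log: list[dict] = []

def record_interaction(user: str, prompt: str, response: str) -> dict:
    """Retain the prompt/response pair and flag obvious policy violations."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "violation": bool(BLOCKED.search(prompt) or BLOCKED.search(response)),
    }
    audit_log.append(entry)
    return entry

e = record_interaction("alice", "What is our database password?", "I can't share that.")
print(e["violation"])  # flagged for review
```

The key design point is that every interaction is retained, not just the flagged ones, so that investigators can reconstruct context after an incident.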

In conclusion, the successful implementation of Generative AI hinges on robust cybersecurity and data security measures. By prioritizing these aspects, businesses can harness the transformative power of AI while ensuring the safety and integrity of their data and systems.

Together, we can build a secure and innovative future!

We hope this first edition of our blog series has been informative. If you would like more information about our cybersecurity assessment, please contact us.

References:

https://www.avepoint.com/shifthappens/reports/artificial-intelligence-and-information-management-report-2024

https://www.savvy.com.au/media-releases/cybercrime-in-australia-report/

https://www.cyber.gov.au/about-us/view-all-content/news-and-media/2023-ASD-cyber-threat-report

https://kmtech.com.au/information-centre/top-cyber-security-trends-and-statistics/
