
AI Regulations that Privacy and Cybersecurity Teams Need to Know

June 26, 2024

by Jeff Broth

Handling IT and data management regulations is nothing new for security and privacy compliance officers, but artificial intelligence (AI) tools and technologies are opening up entirely new areas of concern. The compliance burden for AI tools is growing rapidly, for several reasons.

Governments and global organizations are responding to serious concerns about the use of AI technologies. Consumers worry that AI could abuse their data, so enterprises need to stay ahead of incoming legislation to maintain customer trust. Moreover, AI applications have different vulnerabilities than existing SaaS tools.

A company’s current security and privacy frameworks might not be effective for an AI tech stack and may need to be revised. In May 2023, Samsung banned all its employees from using generative AI tools on work devices after an incident in which proprietary data was leaked to ChatGPT, and many other companies have expressed concerns about data privacy and copyright violations.

International, national, and regional regulations are increasingly appearing on the scene. Over 1,600 policy initiatives, ranging from binding laws to guidelines that limit or shape AI development, have already been adopted across 69 countries and the EU. The last year alone saw significant legislation become law in the EU, China, and the US, among others, and more is likely to arrive by the end of 2024.

It’s vital for compliance teams to be aware of what’s coming down the line and to prepare to meet upcoming requirements. Here are the main AI regulation trends that your GRC teams need to know.

Why Does AI Need So Much Regulation?

Unregulated AI could pose dangers on an enormous scale. Deepfakes might affect the outcome of elections, users could easily obtain clear instructions for manufacturing a dirty bomb, and serious health inequities could arise: research has shown, for example, that AI-powered diagnostic tools miss liver disease twice as often in women as in men. Organizations need tools like Fairlearn to assess and mitigate bias in AI systems.
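As a minimal sketch of what such an audit looks like in practice, the Python snippet below uses Fairlearn’s MetricFrame to compare miss rates (false negative rates) across patient groups. The data here is synthetic and the variable names are illustrative; a real audit would use a model’s actual test-set predictions.

```python
# Minimal bias-audit sketch with Fairlearn; synthetic data stands in
# for a real diagnostic test set.
import numpy as np
from fairlearn.metrics import MetricFrame, false_negative_rate

rng = np.random.default_rng(0)
n = 1_000
sex = rng.choice(["female", "male"], size=n)   # sensitive feature
y_true = rng.integers(0, 2, size=n)            # true diagnosis (1 = disease)

# Toy predictions that miss positive cases more often for one group.
miss_prob = np.where(sex == "female", 0.4, 0.2)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_prob), 0, y_true)

frame = MetricFrame(
    metrics={"miss_rate": false_negative_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(frame.by_group)      # miss rate per group
print(frame.difference())  # largest gap between groups
```

A gap surfaced this way would then feed into mitigation, for instance via Fairlearn’s reduction-based techniques or by rebalancing the training data.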

AI regulations are designed to keep user data safe and confidential, and to limit the autonomy that AI systems have to impact the real world. AI models are opaque, which raises the risk of hidden bias and discrimination. Clear and comprehensive regulations are the only way to mitigate these risks and lay the groundwork for trust and accountability between users, developers, and stakeholders.

What’s more, unregulated AI could worsen economic inequities and cause unprecedented societal strife. We need laws that ensure fair compensation for authors and bloggers whose data is used to train algorithms that bring wealth to a few individuals, and that provide income for people displaced by AI-powered automation. It’s estimated that AI could replace 800 million jobs, or 30% of the global workforce, by 2030.

The internet industry today is a cautionary tale of the need to put regulations in place from the very beginning. Antitrust laws were too slow to keep up with the evolving industry. By the time such laws were enacted, there was no way for new companies to obtain the data and compute power to rival corporations like Microsoft or Google.

AI Regulations Are Extensive and Burgeoning

Numerous regulations and policies have already been passed to bring order and stability to the AI vertical. In the US, for example, the Biden administration issued an executive order last October on safe, secure, and trustworthy AI, covering privacy and cybersecurity issues. The country has many sector-specific and local laws and regulations as well, such as Colorado’s Algorithm and Predictive Model Governance Regulation and New York City’s Local Law 144.

In Europe, the EU passed the landmark AI Act, which regulates the development and use of foundational models, requires transparency about model training, and sets rules for data collection and usage. Canada established the Pan-Canadian AI Strategy and the Canadian AI Ethics Council to advocate for responsible AI development and address ethical AI issues, and passed the Personal Information Protection and Electronic Documents Act (PIPEDA) to govern the collection, use, and disclosure of personal data through AI technologies.

Chinese laws require any foundation model to be registered with the government before it can be released to the public, while Australia passed the National Artificial Intelligence Ethics Framework to ensure that AI technologies are developed ethically. South Korea and Thailand have both passed AI bills; New Zealand has issued Generative AI Guidance for the Public Sector; and Japan, Rwanda, Nigeria, and South Africa have all drafted national AI strategies or agreements.

Additionally, many broader data privacy and digital-market laws, like the GDPR and the Digital Markets Act, bring AI technologies within their frameworks. With so many geo-dependent regulations in play, Cypago offers a solution that opens up visibility into every use of AI in your ecosystem and products, making it possible to adjust AI compliance levels for different audiences and geographies. Cypago includes built-in support for AI governance frameworks like NIST AI RMF and ISO/IEC 42001, and also lets you create custom frameworks based on risk analyses executed via the platform.
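To make the geography problem concrete, here is a small, purely hypothetical Python sketch (not Cypago’s actual API) of how a GRC team might map each AI system in an inventory to the frameworks and regional rules that could apply to it. The rule table is illustrative only; real mappings come from legal review.

```python
# Hypothetical illustration of per-region AI compliance mapping;
# not a real product API, and the rule table is illustrative only.
from dataclasses import dataclass

# Illustrative region -> obligations table.
REGIONAL_RULES = {
    "EU": ["EU AI Act", "GDPR"],
    "US": ["NIST AI RMF", "NYC Local Law 144 (hiring tools)"],
    "CN": ["foundation-model registration"],
}

@dataclass
class AISystem:
    name: str
    regions: list[str]               # where the system is offered
    processes_personal_data: bool

def applicable_rules(system: AISystem) -> set[str]:
    """Collect the frameworks a system may need to comply with."""
    rules = {"ISO/IEC 42001"}        # assumed baseline governance standard
    for region in system.regions:
        rules.update(REGIONAL_RULES.get(region, []))
    if system.processes_personal_data and "EU" in system.regions:
        rules.add("GDPR data protection impact assessment")
    return rules

chatbot = AISystem("support-chatbot", ["EU", "US"], processes_personal_data=True)
print(sorted(applicable_rules(chatbot)))
```

The point of such an inventory is agility: when a new law lands in one region, only the rule table changes, and every affected system surfaces immediately.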

Having the ability to tackle AI compliance on an agile basis is critical, because several more laws are in the pipeline. For example, the US Congress is considering several proposals that address transparency, deepfakes, and platform accountability, while the EU is working on the AI Liability Directive to deliver financial compensation for people who have been harmed by AI.

The African Union is expected to release a continent-wide AI strategy this year; Australia’s AI taskforce is likewise due to publish safeguards for governmental AI use; and China announced that it was working on a single, comprehensive AI law for the country.

Will AI Regulatory Approaches Diverge or Harmonize?

It’s unclear whether the AI regulatory universe will unite or fragment. Motivations and methods are divided. In the EU, the primary concern is to protect users and deliver transparency, while the US is focused on supporting innovation, China is concentrating on state control over information, and African countries prioritize competitiveness with other markets.

In the EU and China, the inclination is towards binding laws enforced with penalties, whereas the US, so far, seems to prefer non-mandatory recommended best practices.

Opinion is also split about the benefits of a global standard. It could reduce compliance complexity and make it easier for companies that use AI to work across borders; as with other major issues such as climate change and pandemics, no single nation can contain the risks of AI alone. The EU’s AI Act may eventually gain global acceptance, as the GDPR did, thanks to the EU’s heft as a massive international market.

On the other hand, global standards could stifle innovation and squash emerging businesses. Startups don’t have the resources to comply with AI laws that require resource-heavy oversight, but if regulation varies geographically, early-stage AI developers can innovate in one region before gaining the capabilities for global compliance. Singapore, the Philippines, South Korea, Japan, and India have expressed these concerns and refrained from aligning with the AI Act.

This disunity means that enterprises need an AI legislation tracker tool, like the International Association of Privacy Professionals’ Global AI Law and Policy Tracker, in order to remain aware of new regulations and the laws that apply in different regions.

It’s also not yet evident where the burden of compliance will fall. It may rest mainly on AI developers, engineers building foundational models, and/or on government contractors. Compliance may vary on a regional, municipal, or national level, and different industries will likely see different levels of regulation. For example, the wider public is demanding more transparency in high-risk industries like healthcare and finance.

Meanwhile, international organizations like the UN, OECD, and G20 are creating working groups, advisory boards, and principles around AI. We may see gradual harmonization leading to a broad consensus, rather than a single framework adopted to set worldwide standards.

AI Compliance Requires Increasing Investment

Although it’s hard to predict how AI regulations will evolve, it’s clear that the compliance burden isn’t going to ease anytime soon. Forward-thinking enterprises that want to take advantage of AI technologies need to plan ahead for the tools and strategies that will streamline AI compliance.
