
Two-thirds of organizations are not prepared for AI risks


EXECUTIVE SUMMARY:

Artificial intelligence is the new epicenter of value creation. Employees across industries are eager to deploy easily accessible generative AI tools to elevate the quality of their output and to improve their efficiency.

According to the latest research, 60% of employees use generative AI tools to augment their efforts. Roughly 42% of organizations say that they permit the use of generative AI in the workplace, although only 15% maintain formal policies governing the everyday use of AI.

And among organizations that permit the application of generative AI to everyday tasks, research indicates that only a third actively monitor the ethical, cyber security and data privacy risks inherent in the technology.

Explaining the lack of suitable AI policies

The gap between AI use and AI governance can be attributed to the fact that senior organizational leaders aren't particularly comfortable with AI technologies, and neither are the department heads and team leads around them.

In addition, AI is a fast-moving field. Policies that made sense three months ago may no longer be relevant today, and may be obsolete three months from now. Most organizations lack tiger teams that can spend all day iterating on policies, although developing AI governance teams may quickly become a competitive advantage.

For organizations with existing AI policies, employee enthusiasm about the benefits of AI may outweigh their interest in adhering to top-down policies, especially if there are no incentives for compliance or consequences for flouting the rules.

Addressing AI risks (in general)

As noted previously, some organizations are struggling with top-down AI policy implementations. To overcome this challenge, experts suggest that organizations build AI policies, specifically those concerning ChatGPT and similar tools, from the ground up. In other words, leaders may wish to solicit ideas and feedback from employees across the organization.

Organizations may also want to leverage reputable, industry-backed frameworks and resources when developing responsible and effective AI governance policies. Resources to review include NIST's AI Risk Management Framework and ISACA's new online courses.

Addressing AI risks (in cyber security)

In relation to cyber security, the AI risks are numerous and varied. There are risks stemming from the data that employees feed into AI models, risks concerning 'data poisoning' of training data, risks related to evolving AI-enabled threats, risks of bias concealed by the opacity of models, and more. The first of these risks is illustrated in the sketch below.
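To make the data-input risk concrete, here is a minimal, hypothetical Python sketch of one common mitigation: redacting obviously sensitive strings from a prompt before it leaves the organization. The pattern set and function names are illustrative assumptions, not part of any Check Point product or specific tool; a production deployment would rely on full data loss prevention (DLP) tooling and classifiers rather than a handful of regular expressions.

import re

# Hypothetical patterns a security team might flag before a prompt
# is sent to an external generative AI service. Real deployments
# would use far richer detection (DLP tools, classifiers, allow-lists).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|api|key)[-_]\w{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    # Replace each detected sensitive token with a labeled placeholder
    # so the prompt remains useful but no longer leaks the raw value.
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the ticket from jane.doe@example.com, key sk_live_abcdef1234567890."
    print(redact_prompt(raw))
    # -> Summarize the ticket from [REDACTED_EMAIL], key [REDACTED_API_KEY].

The design point is that the check runs before the data reaches the model provider; once a prompt has been submitted to an external service, the organization has effectively lost control of that data.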

A range of incisive and influential resources exists to help cyber security leaders mitigate risks pertaining to AI, including the frameworks and courses noted above.

Even with these resources, leaders may still feel under-prepared to make policy decisions surrounding cyber security, data security and AI. If this sounds like you, consider AI governance and training certificate programs.

Building a secure future together

Check Point understands the transformative potential of AI, but also recognizes that, for many organizations, security concerns abound. Rapid AI adoption, fueled by employee enthusiasm, can leave security teams scrambling to keep up.

This is where Check Point's Infinity Platform comes in. The Check Point Infinity Platform is specifically designed to address the unique security challenges presented by the "AI revolution." It empowers organizations to thrive amid uncertainty and regulations that are tough to keep up with.

Prioritize your cyber security. Discover a comprehensive, proactive approach to safeguarding systems that use AI, delivered by a platform that is itself powered by AI. Learn more here.

To receive compelling cyber insights, groundbreaking research and emerging threat analyses each week, subscribe to the CyberTalk.org newsletter.
