How Trustworthy Is Your Use of AI?


When you lose other people’s trust, it’s often difficult to regain it. Or impossible.

As CIOs and other IT decision-makers orchestrate their companies' use of ChatGPT and other generative AI programs, they need to ensure their AI initiatives don't suddenly implode. A poorly managed generative AI effort could lead company leadership to halt all AI projects or, worse yet, unilaterally reject most or all generative AI uses.

A case in point: Earlier this month, Samsung banned the use of ChatGPT and other generative AI tools after the technology company discovered that, on three occasions, staff members had uploaded internal source code to ChatGPT's servers for code optimization and debugging. In a different incident, a Samsung employee shared recorded meeting transcripts with ChatGPT and asked the program to write meeting minutes.

Uploading proprietary source code or potentially sensitive business information to external servers operated by an AI provider poses obvious business risks. A company can't control access to its information, nor can it reasonably expect an AI provider to maintain perfect security on its servers. About a month ago, a bug caused ChatGPT to temporarily expose some users' chat histories, and possibly payment information, to other users.
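One common mitigation is a pre-submission guardrail that screens prompts for likely-sensitive content before anything leaves the company's network. The sketch below is purely illustrative, not a production data-loss-prevention system: the patterns, the `safe_to_send` helper, and the `guarded_call` wrapper are all hypothetical names, and a real deployment would rely on the organization's own DLP tooling and classifiers rather than a handful of regexes.

```python
import re

# Illustrative patterns only; a real guardrail would use dedicated
# DLP tooling, not a short regex list.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),       # key material
    re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]"),   # credentials
    re.compile(r"(?i)\bconfidential\b|\binternal use only\b"),     # doc markings
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive material."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def guarded_call(prompt: str) -> str:
    """Block risky prompts before they reach an external AI provider."""
    if not safe_to_send(prompt):
        raise ValueError("Prompt blocked: possible sensitive content")
    # In a real system, the vetted prompt would be forwarded to the
    # provider's API here; this sketch just returns it.
    return prompt
```

A check like this runs inside the company's perimeter, so a flagged prompt can be blocked or routed for human review before any data reaches a third party's servers.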

AI Trust, Risk, and Security Management

CIOs and other IT leaders should heed the advice of Gartner, which recently recognized "AI Trust, Risk, and Security Management (AI TRiSM)" as one of the leading strategic technology trends for 2023.

“AI requires new forms of trust, risk and security management that conventional controls don’t provide,” Gartner noted in its “10 Top Strategic Technology Trends 2023” report. “New AI TRiSM capabilities ensure model reliability, trustworthiness, security and privacy.”

Gartner urges companies to embrace AI trust, risk, and security management because it “optimizes trust” and “drives better outcomes in terms of AI adoption, achieved business goals, and user acceptance.”

The payoff? “By 2026, organizations that operationalize AI transparency, trust and security will see their AI models achieve a 50% result improvement in terms of adoption, business goals and user acceptance,” according to Gartner.

Not surprisingly, many business executives believe generative AI will have a major impact on their business in the next three to five years, according to a new KPMG survey of 225 executives at U.S. companies with revenues over $1 billion.

Looking Ahead

If your company is at the start of its generative AI journey, take the first step of selecting an individual or team to organize and spearhead your company's generative AI initiatives. According to the KPMG survey, 68% of organizations have yet to take this necessary first step.

And IT leaders are well-positioned to be that individual or to set up a team for success.

As for colleagues who might question the need for such an individual or team, they would do well to listen to Avivah Litan, a Gartner Distinguished VP Analyst, who points out that “AI threats and compromises (malicious or benign) are continuous and constantly evolving, so AI TRiSM must be a continuous effort, not a one-off exercise.”
