Highlights:

  • ExtraHop Networks Inc., a cloud-based network detection company, recently announced increased visibility for organizations whose employees use generative artificial intelligence tools such as OpenAI LP’s ChatGPT.
  • ExtraHop aims to reduce that risk: the Reveal(x) tool, part of its network detection and response platform, monitors devices and users connecting to OpenAI domains.

ExtraHop Networks Inc., a cloud-based network detection company, recently announced increased visibility for organizations whose employees use generative artificial intelligence tools such as OpenAI LP’s ChatGPT.

To accomplish this, it launched a new tool called Reveal(x) that helps businesses better understand their risk exposure and ensure that generative AI is used in accordance with their AI policies.

ExtraHop explained in a blog post that organizations have legitimate concerns about employees using AI-as-a-service tools because of the risk of data and intellectual property breaches when employees share data with those services. According to ExtraHop, employees may not realize that by sharing information with these tools, they are effectively making it public.

The company said, “The immediate IP risk centers on users logging into the websites and APIs of these generative AI solutions and sharing proprietary data. However, this risk increases as local deployments of these systems flourish, and people start connecting them to each other. Once an AI service is determining what data to share with other AIs, the human oversight element that currently makes that determination is lost.”

ExtraHop seeks to mitigate this risk. The Reveal(x) capability is integrated into its existing network detection and response platform and helps monitor all devices and users connecting to OpenAI domains. Doing so makes it possible to determine which employees use AI services and how much data they transmit to the respective domains. This enables security teams to evaluate the risk associated with each individual’s use of generative AI.

Reveal(x) uses network packets as its primary data source for monitoring and analyzing AI utilization, allowing it to display precisely how much data is sent to and received from OpenAI domains. Thus, security executives can assess what lies within an acceptable range and what may indicate IP loss.
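To make the idea concrete, the following is a minimal sketch, not ExtraHop’s implementation, of how per-device traffic volumes to OpenAI domains could be tallied from simple flow records. The record layout, the device names and the OPENAI_DOMAINS list are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical flow records: (client_device, server_hostname, bytes_out, bytes_in).
# A real network detection platform derives these from wire data; this is only an illustration.
OPENAI_DOMAINS = ("openai.com", "api.openai.com", "chat.openai.com")

def is_openai(hostname: str) -> bool:
    """Match a server hostname against the illustrative list of OpenAI domains."""
    return any(hostname == d or hostname.endswith("." + d) for d in OPENAI_DOMAINS)

def tally_openai_traffic(flows):
    """Aggregate bytes sent to and received from OpenAI domains, per client device."""
    totals = defaultdict(lambda: {"bytes_out": 0, "bytes_in": 0})
    for client, hostname, bytes_out, bytes_in in flows:
        if is_openai(hostname):
            totals[client]["bytes_out"] += bytes_out
            totals[client]["bytes_in"] += bytes_in
    return dict(totals)

flows = [
    ("laptop-42", "chat.openai.com", 1_800, 12_000),     # small prompt and response
    ("laptop-42", "api.openai.com", 4_500_000, 20_000),  # large upload to the API
    ("laptop-17", "example.com", 900_000, 50_000),       # unrelated traffic, ignored
]
print(tally_openai_traffic(flows))
```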

For instance, a user transmitting basic queries to ChatGPT would only submit bytes or kilobytes of data. However, if a device begins transmitting megabytes of data, it indicates that the employee may be providing confidential information along with their query. Reveal(x) will assist in identifying the type of data being sent and the individual files, provided the data is not encrypted.
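One simple way to picture that heuristic is a volume threshold on outbound traffic to OpenAI domains, as in the sketch below. The 1-megabyte cutoff and the alert format are assumptions for illustration, not ExtraHop defaults.

```python
# Illustrative threshold check over per-client totals like those produced above.
# The 1 MB cutoff is an arbitrary example value, not an ExtraHop setting.
UPLOAD_ALERT_THRESHOLD = 1_000_000  # bytes sent to OpenAI domains

def flag_heavy_uploaders(totals, threshold=UPLOAD_ALERT_THRESHOLD):
    """Return clients whose outbound volume suggests more than short text prompts."""
    return {
        client: stats["bytes_out"]
        for client, stats in totals.items()
        if stats["bytes_out"] > threshold
    }

totals = {
    "laptop-42": {"bytes_out": 4_501_800, "bytes_in": 32_000},  # possible file contents
    "laptop-07": {"bytes_out": 2_400, "bytes_in": 9_500},       # ordinary short prompts
}
print(flag_heavy_uploaders(totals))  # {'laptop-42': 4501800}
```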

Chris Kissel, an analyst at International Data Corp., stated that the loss of intellectual property and consumer data is one of the greatest dangers associated with using AI-as-a-service tools.

Kissel said, “ExtraHop is addressing this risk to the enterprise by giving customers a mechanism to audit compliance and help avoid the loss of IP. With its strong background in network intelligence, ExtraHop can provide unparalleled visibility into the flow of data related to generative AI.”