News

OpenAI's Bespoke Chatbots Accidentally Reveal Insider Information, Should We Worry?

OpenAI's Bespoke Chatbots Accidentally Reveal Insider Information, Should We Worry?

Austin Jay

In November, OpenAI launched GPT's capabilities, allowing anyone to develop and operate their respective AI bots (customs GPTs).

These customized GPTs involve various kinds of activities, such as suggesting ideas about telecommuting, research analysis, and transforming people's images.

However, despite their different roles, they create privacy-related issues.

Security experts investigating these chatbots discovered gaps that can affect personal information. The file leaks may originate from the first set of GPT production manuals released that might contain confidential information and highly valued proprietary details.

Some essential secrets of the operation of custom GPT are likely to become known due to unintended disclosure. Also, recently, Amazon releases their very own version of chatbot, offering it to their AWS customers.

Open AI Risks
(Photo : Pexels/cottonbro studio )

Researchers Doing Their Best in Extracting Possible Risks Using AI

The focus of Jiahao Yu as a computer science researcher is how these privacy problems may lead to dangerous consequences for many people.

As a result, the success of Yu's experiment implied that more than two hundred modified GPTs could certainly reveal personal data like passwords, credit cards, and social security numbers.

Despite being simple to create, their fundamental risks cannot be ignored. As such, the users can construct these GPTs from a high probability of use in offline or online applications via this OpenAI-built platform.

On the one hand, app creation is simple and convenient, but on the other, the problem arises with data privacy for users.

Giving ChatGPT instructions about its capabilities and limitations is the first step in creating a tailored GPT.

For instance, a tax bot might be designed to respond solely to inquiries about taxes in the US. Users can enhance their chatbots' functionality and range of tasks by adding specific documents or third-party APIs.

Secure development practices are essential because these conveniences also raise the risk of unintentional data loss.

Although they frequently store unsensitive data, custom GPTs can also hold sensitive information. Vulnerabilities, such as prompt injections, data exposure, and API compromise risk are highlighted by Yu and Polyakov.

While OpenAI seeks to improve security, it cannot wholly prevent prompt injections, highlighting the need for developers to be extra vigilant and take preventative action.

In another report, researchers successfully extracted training data from ChatGPT despite its alignment to prevent data leaks.

While smaller models emitted memorized data less than 1% of the time, ChatGPT did so much more frequently, approximately 150 times. By prompting it with a specific word-repeat attack, it revealed memorization.

To verify recovered data, they created an index using 10TB of internet data, cross-referencing ChatGPT-generated text with pre-existing online content.

Any matching sequences are considered memorized.

This method identified paragraphs matching internet data word-for-word, highlighting the model's data recovery capability. Despite safeguards, models can possess hidden capabilities, necessitating strategic questioning for their disclosure.

Also Read: ChatGPT Mobile: How To Use New AI Apps Securely

Amazon's New AI Chatbot 'Q'

A recent story says that Amazon launched "Q," a chatbot in a box for businesses targeted at Amazon web services (AWS) customers.

As an opponent of ChatGPT, Bard, and Copilot, "Q" enhances privacy by charging $20 per month for the controlled use of corporate data.

Speaking on misinformation AI creates, Amazon CEO Andy Jassy pointed out that solid access controls should be a priority.

In this regard, Andrew Selipsky, CEO of AWS, emphasized that the chatbot's dependency on some specific data was one of the mechanisms to balance with MS relying too much on open AI.

At this time, the stability of Selisky's corporation and the AI reliability made him recommend that many other AI suppliers be considered in the announcement, pointing towards recent instabilities with Open AI leadership.

The release of "Q," which involves heightened security protocols to protect user privacy by restricting third-party access to the database, is a significant development for the Generative AI market.

Related Article: Amazon's 'AI Ready' Initiative to Offer Free AI Skills Training for Millions of People by 2025

© Copyright 2020 Mobile & Apps, All rights reserved. Do not reproduce without permission.

more stories from News

Back
Real Time Analytics