According to an email seen by Politico, the EU Parliament’s IT department said the security of any data uploaded to AI servers, such as those used by ChatGPT, could not be guaranteed, and that the extent of the data shared with AI firms is still being assessed.
The chamber told its members on Monday (16 February) that “built-in artificial intelligence features” had been disabled on corporate tablets following the information from the IT department.
“Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device,” the IT team said.
“As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled.”
Features prohibited on EU lawmaker devices include writing assistants, summarising tools, webpage summarisers, and enhanced virtual assistants, according to an EU official.
Tools like email, calendars, documents, and other everyday apps were not impacted.
When questioned by Politico, however, the Parliament declined to detail which built-in AI features had been disabled or what operating systems the devices run on.
It told the publication that it “constantly monitor[s] cyber security threats and quickly deploys the necessary measures to prevent them,” but said that, due to the “sensitive nature” of these measures, it would not discuss their specifics.
AI has expanded the threat landscape, making accidental breaches far easier to cause when individuals or groups submit sensitive data to generative AI chatbots for analysis or other purposes.
Late last year, the NSW Reconstruction Authority (RA), the state agency responsible for mitigating damage from natural disasters, said it was aware that a “data breach” had occurred, affecting thousands of applicants to the Northern Rivers Resilient Homes Program (RHP), which provides financial assistance to homeowners seeking to improve the flood resistance of their homes.
“The breach occurred when a former contractor of the RA uploaded data containing personal information to an unsecured AI tool which was not authorised by the department,” the NSW government said.
“There is no evidence that any information has been made public; however, Cyber Security NSW will continue to monitor the internet and the dark web to see if any of the information is accessible online.”
According to an NSW government release, the former contractor posted 10 columns and over 12,000 rows of data from a Microsoft Excel spreadsheet into ChatGPT.
Based on “early forensic analysis”, as many as 3,000 people may have been impacted, with data exposed to ChatGPT including names, addresses, email addresses, phone numbers, and personal and health data.
Daniel Croft