OpenAI just launched an intelligent agent based on ChatGPT. Exciting. But the new agentic era poses head-scratching questions around identity and authentication. Tim Green, Programme Director for MEF ID and Data, outlines the big issues.
I remember the day I first heard about intelligent agents. I was on stage at a MEF event probably six or seven years ago. The panellists started talking about a future in which bots would act on our behalf – organising our schedules, finding info, making bookings, offering suggestions. At some point, they would start ‘talking’ to other digital agents in some parallel virtual sphere.
My mind was blown. It seemed feasible, yet also unfathomable.

Well, here we are. The future has arrived. In January, OpenAI started testing Operator, a ChatGPT agent that performs tasks for you via its own browser.
Then, last month, OpenAI confirmed that Operator had been rolled into ChatGPT. It said the new product – ChatGPT Agent – will combine “Operator’s ability to interact with websites, deep research’s skill in synthesising information, and ChatGPT’s conversational fluency.”
To give an example of the difference between the two products, ChatGPT will compose an email for you, whereas ChatGPT Agent will monitor your inbox, find messages that need attention, draft contextual responses, suggest follow-up meetings and more.
It’s all very exciting. But from an identity point of view, a world of intelligent agents raises huge and consequential questions such as:
- How do we assign reliable, immutable identities to digital entities?
- How do we make these identities persistent over time?
- How do we link agent IDs to their human ‘owners’? (a rough sketch follows this list)
- How should humans authenticate their agents?
- Is it OK to share user names and passwords with a digital agent?
- How can users trust that agents will keep data private?
- How can users set parameters for what agents can and can’t do?
- How should agents declare their identities to other users?
- How should agents keep records of their tasks and data collection?
- How can systems identify and defend against scam and fraudulent agents?
- What is the legal status of decisions made by agents (since standard Ts & Cs assume a human party)?
- If two identical copies of an AI system are created, are they one agent or two?
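To make two of these questions concrete – linking an agent ID to its human owner, and bounding what the agent can and can’t do – here is a minimal sketch of a signed, scoped, short-lived delegation credential. It assumes a simple HMAC scheme and invented field names (owner, agent, scopes, expires); treat it as a discussion aid, not an existing standard.

```python
# Sketch only: a human "owner" signs a short-lived, scoped grant for an agent.
# The field names and the HMAC scheme are illustrative assumptions, not a spec.
import hashlib
import hmac
import json
import time

OWNER_SECRET = b"owner-signing-key"  # hypothetical key; in reality held by an identity provider

def issue_agent_credential(owner_id: str, agent_id: str, scopes: list[str], ttl_seconds: int) -> dict:
    """Sign a short-lived grant linking an agent to its human owner."""
    grant = {
        "owner": owner_id,
        "agent": agent_id,
        "scopes": scopes,                           # what the agent may do on the owner's behalf
        "expires": int(time.time()) + ttl_seconds,  # bounded lifetime, not an open-ended password
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = hmac.new(OWNER_SECRET, payload, hashlib.sha256).hexdigest()
    return grant

def verify_agent_credential(grant: dict, requested_scope: str) -> bool:
    """Check signature, expiry and scope before honouring an agent's request."""
    unsigned = {k: v for k, v in grant.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(OWNER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, grant["signature"]):
        return False    # tampered or forged credential
    if time.time() > grant["expires"]:
        return False    # expired: the human must re-authorise
    return requested_scope in grant["scopes"]

cred = issue_agent_credential("tim@example.com", "agent-42", ["calendar:read", "email:draft"], 3600)
print(verify_agent_credential(cred, "email:draft"))    # True: within scope and in date
print(verify_agent_credential(cred, "payments:send"))  # False: never granted
```

The detail worth noticing is the expiry: instead of handing an agent a username and password indefinitely, the human re-authorises it at intervals – one possible answer to the password-sharing question above.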
In the agentic future described earlier, millions of consumer bots will interact and transact with millions of organisational bots. In this world, the challenges facing enterprises will be especially onerous. Enterprises have regulatory duties around personal data and audit trails. Can these duties translate to an agentic context? Imagine a healthcare agent accessing patient data. How can it seek permission for this? What human oversight will be required?
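One way to picture those duties: before an agent touches patient data, the system checks a consent register and writes an audit entry naming the human who signed off. Everything below – the consent register, the field names, the oversight flag – is an assumption for illustration, not a compliance recipe.

```python
import json
from datetime import datetime, timezone

# Hypothetical consent register: (patient, agent) -> actions the patient has approved.
CONSENTS = {("patient-001", "agent-42"): {"records:read"}}

def agent_access(agent_id: str, patient_id: str, action: str, approved_by: str | None) -> bool:
    """Permit the action only with recorded consent, and log every attempt either way."""
    permitted = action in CONSENTS.get((patient_id, agent_id), set())
    audit_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "patient": patient_id,
        "action": action,
        "permitted": permitted,
        "human_oversight": approved_by,  # which clinician signed off, if any
    }
    print(json.dumps(audit_entry))       # in practice: append to a tamper-evident audit log
    return permitted

agent_access("agent-42", "patient-001", "records:read", approved_by="dr-jones")  # permitted and logged
agent_access("agent-42", "patient-001", "records:export", approved_by=None)      # denied and logged
```

Note that denied attempts are logged too: an audit trail that only records successes would be of little use to a regulator.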
And then there’s the privacy/security question. A few weeks ago, I wrote a post about the appearance of private information in AI training data. This was in response to a paper which revealed that millions of images of passports, credit cards, birth certificates, and other personally identifiable information (PII) had been found in an open-source training set.
The fact is, once a model subsumes private data, it’s almost impossible to find and delete it. Of course, it’s possible to tackle the problem by blocking AI crawlers from scraping personal content without permission. But that tactic has no relevance here: the whole point of an intelligent agent is that people submit their private details to it directly.
For now, we are still in the early stages of defining what agentic ID should look like. We don’t have clear answers to the above questions. But we may not have the luxury of time. ChatGPT famously took just two months to reach 100 million users. That’s how fast this tech spreads.
We need shared language to articulate the challenges before we even think about legal and technical frameworks. As director of MEF’s ID and Data programme, I believe our members can play a part, at the very least in articulating those challenges. For this reason, I want to develop a positioning document outlining the issues and exploring potential ways forward.
I will be approaching members for feedback over the coming weeks. Please get in touch if you have ideas to share. Or get your bot to do it for you.
Find out more about the themes discussed – Join the MEF ID & Data Interest Group.