Assistants based on artificial intelligence (AI), such as Apple’s Siri and Amazon’s Alexa, have been around for more than a decade. An AI assistant can be defined in many ways. In an April 2024 report, Google DeepMind defined “an AI assistant … as an artificial agent with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations”.
What are AI agents?
The next generation of AI assistants, called AI agents (AIAs), is set to surpass its predecessors in ability as well as efficiency. AIAs can be broadly classified into three categories.
Reactive agents are first-generation AI agents developed to respond to specific inputs or commands. They follow predefined rules and perform tasks limited in scope because they can’t learn anything new and lack the ability to adapt. Learning agents, made possible by machine learning, can learn from experience. They have better abilities, such as pattern detection and data analysis, and can improve their performance over time. Finally, cognitive agents can reason, analyse, and plan. They have cognitive skills because they can learn from their environment, adapt, and make decisions based on algorithms and their own ‘knowledge’. These agents use techniques including natural language processing, computer vision, and deep learning to perform tasks. The present generation of AIAs are cognitive agents.

AIAs can perform multiple functions as users’ agents or autonomously (that is, without instructions or user intervention). They can be integrated with the ‘internet of things’, allowing them to connect with multiple devices and their sensors and to collect and analyse data in real time.
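To make the difference between the first and later generations more concrete, here is a minimal, purely illustrative sketch in Python. The class names, commands, and feedback rule are invented for this example and do not describe any real product: the ‘reactive’ agent can only follow fixed rules, while the ‘learning’ agent adjusts its choices based on user feedback.

```python
# Illustrative sketch only: contrasts a rule-bound "reactive" agent with a
# simple "learning" agent. All names and rules here are hypothetical.

class ReactiveAgent:
    """Responds to fixed commands with predefined rules; cannot adapt."""
    RULES = {
        "lights on": "Turning the lights on.",
        "lights off": "Turning the lights off.",
    }

    def respond(self, command: str) -> str:
        # Anything outside the predefined rules simply fails.
        return self.RULES.get(command, "Sorry, I don't know that command.")


class LearningAgent:
    """Keeps per-command feedback scores and prefers the best-rated reply."""

    def __init__(self) -> None:
        self.scores: dict[tuple[str, str], int] = {}

    def respond(self, command: str, candidate_replies: list[str]) -> str:
        # Pick the reply with the highest cumulative feedback so far.
        return max(candidate_replies,
                   key=lambda reply: self.scores.get((command, reply), 0))

    def feedback(self, command: str, reply: str, rating: int) -> None:
        # Learn from experience: accumulate the user's rating.
        self.scores[(command, reply)] = (
            self.scores.get((command, reply), 0) + rating)


if __name__ == "__main__":
    reactive = ReactiveAgent()
    print(reactive.respond("lights on"))      # follows a fixed rule
    print(reactive.respond("plan my trip"))   # cannot handle anything new

    learner = LearningAgent()
    replies = ["Here is a beach itinerary.", "Here is a mountain itinerary."]
    print(learner.respond("plan my trip", replies))
    learner.feedback("plan my trip", "Here is a mountain itinerary.", 1)
    print(learner.respond("plan my trip", replies))  # now prefers mountains
```

A cognitive agent would go further still, combining such learned preferences with language understanding, planning, and connections to external devices and services.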
Cognitive AIAs can also ‘understand’ human speech and language, and with this skill they can perform tasks that require multiple proficiencies. For example, they can plan a trip after listening in on a user’s phone calls, reading their emails, understanding their preferences, and parsing their previous travel experiences.
Recently, a Bengaluru-based startup launched an AIA that can autonomously handle items in a warehouse, receiving voice commands as input and responding with real-time decisions.
Companies and research facilities have also deployed AIAs to drive autonomous vehicles and to guide financial investments and treatment plans. A tool called Orby AI automates repetitive tasks, while 4149 AI collaborates with humans inside apps like Slack and Notion to improve their productivity.
In sum, cognitive AIAs are not limited to their training data, are able to acquire new knowledge without human intervention, and can integrate with other systems. In turn, they enable personalisation by tailoring their responses to users’ preferences and needs. But in doing so, cognitive AIAs also pose many risks.
Challenges posed by AIAs
In particular, cognitive AIAs raise concerns over accountability, liability, and responsibility. Humans’ increasing reliance on AIAs may also render users more vulnerable. For example, when an AIA plans a user’s travel, it accesses and digests vast amounts of information about the user’s plans, schedule, and financial instruments. The companies that build and offer such AIAs must therefore explicitly protect users’ privacy.
Since AIAs can learn and adapt, they can also develop and use hindsight. Such hindsight must remain sensitive to users’ needs as well as moral principles; prioritise users’ safety; and navigate the responsibility to be of help without getting in the way of human autonomy and creativity.
Developers must also incorporate mechanisms that protect AIAs from manipulation by mala fide actors, or at least prevent manipulated AIAs from compromising users’ data.
On agency and liability
Even as people adopt AIAs to help with more and more tasks, many legal and ethical issues remain unresolved. We obviously need better safety measures, but they alone won’t suffice. Since AIAs can be manipulated for malicious purposes, they need to be monitored constantly so they don’t harm users, which in turn raises important questions about accountability that law and regulation must answer. For example, in the absence of legal recognition of AI personhood, the law won’t admit AIAs’ intentions as being distinct from their users’ intentions.
In fact, while we call them “agents”, AIAs possess no agency in the eyes of the law. However, it’s possible to argue that liability for their actions lies with their makers or the corresponding service providers. For example, earlier this year, a court held Air Canada liable after a Canadian man sued the airline for being misled about air fares by a chatbot on its website.
Similarly, Yale University legal scholars Ian Ayres and Jack Balkin argued in a June 2024 article, “Holding AI agents to objective standards of behaviour in turn means holding the people and organisations that implement these technologies to standards of liability and reasonable care. A legal regulation of AI may require companies to internalise the costs of the risks they impose on society.”
Catrin Misselhorn, who studies the philosophy of AI at the University of Göttingen, likewise contended in a 2022 book that it’s unfair to expect AIA users to assume all the responsibility for an AIA’s misdeeds and that part of the blame lies with the programmers whose algorithms guided the AIA’s decisions.
Even more fundamentally, Erez Firt, of the University of Haifa and Israel Institute of Technology, wrote in a February 2024 paper that even ‘artificial moral agents’ with sufficient autonomy and an understanding of human morals shouldn’t be expected to develop human morals themselves.
Taken together, it should be clear that in many ways the issues surrounding the regulation of AIAs can’t be separated from those surrounding the regulation of AI itself. With AIAs being developed for more labour-intensive sectors, we need a nuanced approach to questions of responsibility and liability arising from their use.
Neethu Rajam is associate professor of intellectual property and technology law, National Law University Delhi. Krishna Ravi Srinivas is adjunct professor of law, NALSAR University of Law, Hyderabad; consultant, RIS, New Delhi; and associate faculty fellow, CeRAI, IIT Madras.