Smarter AI Assistants Could Make It Harder to Stay Human


Researchers and futurists have been talking for decades about the day when intelligent software agents will act as personal assistants, tutors, and advisers. Apple produced its famous Knowledge Navigator video in 1987. I seem to remember attending an MIT Media Lab event in the 1990s about software agents, where the moderator appeared dressed as a butler in a bowler hat. With the advent of generative AI, that gauzy vision of software as aide-de-camp has suddenly come into focus. WIRED’s Will Knight provided an overview this week of what’s available now and what’s imminent.

I’m concerned about how this will change us, and our relations with others, over the longer term. Many of our interactions with others will be mediated by bots acting in our stead. Robot assistants are different from human helpers: They don’t take breaks, they can instantly access all the world’s knowledge, and they won’t require paying a living wage. The more we use them, the more tempting it will become to turn over tasks we once reserved for ourselves.

Right now the AI assistants on offer are still unrefined. We’re not yet at the point where autonomous bots will routinely take over activities where screw-ups can’t be tolerated, like booking flights, making doctor’s appointments, and managing financial portfolios. But that will change, because it can. We seem destined to live our lives like long-haul airline pilots—after setting a course, we can lean back in the cockpit as AI steers the plane, switching to manual mode when necessary. The fear is that, eventually, it might be the agents who decide where the plane is going in the first place.

Doomerism aside, all of us will have to deal with someone else’s supersmart and possibly manipulative agents. We’ll turn over control of our daily activities and everyday choices, from shopping lists to appointment calendars, to our own AI assistants, which will also interact with the agents of our family, friends, and enemies. As they gain independence, our automated helpers may end up making decisions or striking deals on our behalf that don’t serve us well at all.

For an upbeat view of this future, I consult Mustafa Suleyman. A cofounder of the AI startup DeepMind, which is now the heart of Google’s AI development, he is currently the CEO of Inflection.ai, a company developing chatbots. Suleyman has also recently taken up residence on The New York Times bestseller list with his book The Coming Wave, which suggests how humans can confront the existential perils of AI. Overall, he’s an optimist, and of course he has a rosy outlook on software agents. He describes the bot his company makes, Pi, as a personal “chief of staff” that provides not only wisdom but empathetic encouragement and kindness.

“Today Pi is not able to book you restaurants or arrange a car or, you know, buy things for you,” Suleyman says. “But in the future, it will have your contractual and legal proxy, which means that you’ve granted permissions to enter into contracts on your behalf, and spend real money and bind you to material agreements in the real world.” Also on the road map: Pi will make phone calls on its owner’s behalf and negotiate with customer service agents.

That seems fair, because right now too many of those service agents are already bots, and—maybe by design?—they aren’t open to reasonable arguments that their corporate employers are screwing over their own customers. Inevitably, we’ll be launching our AIs into negotiations with other AIs in all areas of life. Suleyman acknowledges that we don’t want those bots to get too cozy with each other or to interact in ways not open to human inspection. “We actually want AI-to-AI communication to be limited to plain English,” says Suleyman. “That way, we can audit it.”
