Ray Kurzweil Still Says He Will Merge With A.I.

Sitting near a window inside Boston’s Four Seasons Hotel, overlooking a duck pond in the city’s Public Garden, Ray Kurzweil held up a sheet of paper showing the steady growth in the amount of raw computer power that a dollar could buy over the last 85 years.

A neon-green line rose steadily across the page, climbing like fireworks in the night sky.

That diagonal line, he said, showed why humanity was just 20 years away from the Singularity, a long-hypothesized moment when people will merge with artificial intelligence and augment themselves with millions of times more computational power than their biological brains now provide.
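The scale of that claim is ordinary compound growth. A toy calculation, under the simplifying assumption of one doubling per year (Kurzweil's own charts fit their own rates):

```python
# Illustrative arithmetic only: assuming price-performance doubles once a
# year (a simplification; Kurzweil fits his own rates), 20 years of
# compounding yields roughly a millionfold gain per dollar.
print(f"{2 ** 20:,}")  # 1,048,576
```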

“If you create something that is thousands of times — or millions of times — more powerful than the brain, we can’t anticipate what it is going to do,” he said, wearing multicolored suspenders and a Mickey Mouse watch he bought at Disney World in the early 1980s.

Mr. Kurzweil, a renowned inventor and futurist who built a career on predictions that defy conventional wisdom, made the same claim in his 2005 book, “The Singularity Is Near.” After the arrival of A.I. technologies like ChatGPT and recent efforts to implant computer chips inside people’s heads, he believes the time is right to restate his claim. Last week, he published a sequel: “The Singularity Is Nearer.”

Now that Mr. Kurzweil is 76 and moving a lot more slowly than he used to, his predictions carry an added edge. He has long said he plans to experience the Singularity, merge with A.I. and, in this way, live indefinitely. But if the Singularity arrives in 2045, as he claims it will, there is no guarantee he will be alive to see it.

“Even a healthy 20-year-old could die tomorrow,” he said.

But his prediction is not quite as outlandish as it seemed in 2005. The success of the chatbot ChatGPT and similar technologies has encouraged many prominent computer scientists, Silicon Valley executives and venture capitalists to make extravagant predictions about the future of A.I. and how it will alter the course of humanity.

Tech giants and other deep-pocketed investors are pumping billions into A.I. development, and the technologies are growing more powerful every few months.

Many skeptics warn that extravagant predictions about artificial intelligence may crumble as the industry struggles with the limits of the raw materials needed to build A.I., including electrical power, digital data, mathematics and computing capacity. Techno-optimism can also feel myopic — and entitled — in the face of the world’s many problems.

“When people say that A.I. will solve every problem, they are not actually looking at what the causes of those problems are,” said Shazeda Ahmed, a researcher at the University of California, Los Angeles, who explores claims about the future of A.I.

The big leap, of course, is imagining how human consciousness would merge with a machine, and people like Mr. Kurzweil struggle to explain how exactly this would happen.

Born in New York City, Mr. Kurzweil began programming computers as a teenager, when computers were room-size machines. In 1965, as a 17-year-old, he appeared on the CBS television show “I’ve Got a Secret,” performing a piano piece composed by a computer that he designed.

While still a student at Martin Van Buren High School in Queens, he exchanged letters with Marvin Minsky, one of the computer scientists who founded the field of artificial intelligence at a conference in the mid-1950s. He soon enrolled at the Massachusetts Institute of Technology to study under Dr. Minsky, who had become the face of this new academic pursuit — a mix of computer science, neuroscience, psychology and an almost religious belief that thinking machines were possible.

When the term artificial intelligence was first presented to the public during a 1956 conference at Dartmouth College, Dr. Minsky and the other computer scientists gathered there did not think it would take long to build machines that could match the power of the human brain. Some argued that a computer would beat the world chess champion and discover its own mathematical theorem within a decade.

They were a bit too optimistic. A computer would not beat the world chess champion until the late 1990s. And the world is still waiting for a machine to discover its own mathematical theorem.

After Mr. Kurzweil built a series of companies that developed everything from speech recognition technologies to music synthesizers, President Bill Clinton awarded him the National Medal of Technology and Innovation, the country’s highest honor for technological achievement. His profile continued to rise as he wrote a series of books that predicted the future.

Around the turn of the century, Mr. Kurzweil predicted that A.I. would match human intelligence before the end of the 2020s and that the Singularity would follow 15 years later. He repeated these predictions when the world’s leading A.I. researchers gathered in Boston in 2006 to celebrate the field’s 50th anniversary.

“There were polite snickers,” said Subbarao Kambhampati, an A.I. researcher and Arizona State University professor.

A.I. began to rapidly improve in the early 2010s as a group of researchers at the University of Toronto explored a technology called a neural network. This mathematical system could learn skills by analyzing vast amounts of data. By analyzing thousands of cat photos, it could learn to identify a cat.

It was an old idea dismissed by the likes of Dr. Minsky decades before. But it started to work in eye-opening ways, thanks to the enormous amounts of data the world had uploaded onto the internet — and the arrival of the raw computing power needed to analyze all that data.
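As a rough sketch of what that training recipe looks like in practice, here is a minimal PyTorch example. Random tensors stand in for labeled cat photos, and the tiny network and hyperparameters are invented for illustration; this is the general technique, not the Toronto group's model.

```python
# A minimal PyTorch sketch of learning from labeled examples. Random tensors
# stand in for real cat photos; the architecture is invented for illustration.
import torch
import torch.nn as nn

images = torch.randn(256, 3, 32, 32)            # fake 32x32 RGB "photos"
labels = torch.randint(0, 2, (256,)).float()    # 1 = cat, 0 = not a cat

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # learn local visual patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 1),                  # one "how cat-like?" score
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    scores = model(images).squeeze(1)
    loss = loss_fn(scores, labels)  # how wrong were the guesses?
    loss.backward()                 # compute adjustments for every weight
    optimizer.step()                # nudge the weights toward fewer mistakes
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```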

The result, in 2022, was ChatGPT, driven by that same exponential growth in computing power.

Geoffrey Hinton, the University of Toronto professor who helped develop neural network technology and may be more responsible for its success than any other researcher, once dismissed Mr. Kurzweil’s prediction that machines would exceed human intelligence before the end of this decade. Now, he believes it was insightful.

“His prediction no longer looks so silly. Things are happening much faster than I expected,” said Dr. Hinton, who until recently worked at Google, where Mr. Kurzweil has led a research group since 2012.

Dr. Hinton is among the A.I. researchers who believe that the technologies driving chatbots like ChatGPT could become dangerous — perhaps even destroy humanity. But Mr. Kurzweil is more optimistic.

He has long predicted that advances in A.I. and nanotechnology, which could alter the microscopic mechanisms that control the way our bodies behave and the diseases that afflict them, will push back against the inevitability of death. Soon, he said, these technologies will extend lives at a faster rate than people age, eventually reaching an “escape velocity” that allows people to extend their lives indefinitely.

“By the early 2030s, we won’t die because of aging,” he said.

If he can reach this moment, Mr. Kurzweil explained, he can probably reach the Singularity.

But the trends that anchor Mr. Kurzweil’s predictions — simple line graphs showing the growth of computer power and other technologies over long periods of time — do not always keep going the way people expect them to, said Sayash Kapoor, a Princeton University researcher and co-author of the influential online newsletter “A.I. Snake Oil” and a book of the same name.

When a New York Times reporter asked Mr. Kurzweil in 2013 whether he was predicting immortality for himself, he replied: “The problem is I can’t get on the phone with you in the future and say, ‘Well, I’ve done it, I have lived forever,’ because it’s never forever.” In other words, he could never be proved right.

But he could be proved wrong. Sitting near the window in Boston, Mr. Kurzweil acknowledged that death comes in many forms. And he knows that his margin of error is shrinking.

He recalled a conversation with his aunt, a psychotherapist, when she was 98 years old. He explained his theory of longevity escape velocity — that people will eventually reach a point where they can live indefinitely. She replied: “Can you please hurry up with that?” Two weeks later, she died.

Though Dr. Hinton is impressed with Mr. Kurzweil’s prediction that machines will become smarter than humans by the end of the decade, he is less taken with the idea that the inventor and futurist will live forever.

“I think a world run by 200-year-old white men would be an appalling place,” Dr. Hinton said.

Audio produced by Patricia Sulbarán.

In Ukraine War, A.I. Begins Ushering In an Age of Killer Robots

In a field on the outskirts of Kyiv, the founders of Vyriy, a Ukrainian drone company, were recently at work on a weapon of the future.

To demonstrate it, Oleksii Babenko, 25, Vyriy’s chief executive, hopped on his motorcycle and rode down a dirt path. Behind him, a drone followed, as a colleague tracked the movements from a briefcase-size computer.

Until recently, a human would have piloted the quadcopter. No longer. Instead, after the drone locked onto its target — Mr. Babenko — it flew itself, guided by software that used the machine’s camera to track him.

The motorcycle’s growling engine was no match for the silent drone as it stalked Mr. Babenko. “Push, push more. Pedal to the metal, man,” his colleagues called out over a walkie-talkie as the drone swooped toward him. “You’re screwed, screwed!”

If the drone had been armed with explosives, and if his colleagues hadn’t disengaged the autonomous tracking, Mr. Babenko would have been a goner.

Vyriy is just one of many Ukrainian companies working on a major leap forward in the weaponization of consumer technology, driven by the war with Russia. The pressure to outthink the enemy, along with huge flows of investment, donations and government contracts, has turned Ukraine into a Silicon Valley for autonomous drones and other weaponry.

What the companies are creating is technology that makes human judgment about targeting and firing increasingly tangential. The widespread availability of off-the-shelf devices, easy-to-design software, powerful automation algorithms and specialized artificial intelligence microchips has pushed a deadly innovation race into uncharted territory, fueling a potential new era of killer robots.

The most advanced versions of the technology that allows drones and other machines to act autonomously have been made possible by deep learning, a form of A.I. that uses large amounts of data to identify patterns and make decisions. Deep learning has helped generate popular large language models, like OpenAI’s GPT-4, but it also helps models interpret and respond in real time to video and camera footage. That means software that once helped a drone follow a snowboarder down a mountain can now become a deadly tool.
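The benign version of that capability is available off the shelf. The sketch below uses OpenCV's stock CSRT tracker to follow a selected subject, say a snowboarder, through a video. The file name is a placeholder, and this is the generic hobbyist technique the article alludes to, not any company's targeting code.

```python
# A sketch of off-the-shelf "follow the subject" tracking with OpenCV's CSRT
# tracker (opencv-contrib-python); the video file name is a placeholder.
import cv2

cap = cv2.VideoCapture("snowboarder.mp4")  # placeholder input video
ok, frame = cap.read()
assert ok, "could not read first frame"

# A person picks the target once; after that, the software follows on its own.
bbox = cv2.selectROI("select target", frame, showCrosshair=True)

# In some OpenCV builds this constructor is cv2.legacy.TrackerCSRT_create().
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)    # re-locate the subject each frame
    if found:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:              # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```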

In more than a dozen interviews with Ukrainian entrepreneurs, engineers and military units, a picture emerged of a near future when swarms of self-guided drones can coordinate attacks and machine guns with computer vision can automatically shoot down soldiers. More outlandish creations, like a hovering unmanned copter that wields machine guns, are also being developed.

The weapons are cruder than the slick stuff of science-fiction blockbusters, like “The Terminator” and its T-1000 liquid-metal assassin, but they are a step toward such a future. While these weapons aren’t as advanced as expensive military-grade systems made by the United States, China and Russia, what makes the developments significant is their low cost — just thousands of dollars or less — and ready availability.

Except for the munitions, many of these weapons are built with code found online and components such as hobbyist computers, like the Raspberry Pi, that can be bought from Best Buy or a hardware store. Some U.S. officials said they worried that the abilities could soon be used to carry out terrorist attacks.

For Ukraine, the technologies could provide an edge against Russia, which is also developing autonomous killer gadgets — or simply help it keep pace. The systems raise the stakes in an international debate about the ethical and legal ramifications of A.I. on the battlefield. Human rights groups and United Nations officials want to limit the use of autonomous weapons for fear that they may trigger a new global arms race that could spiral out of control.

In Ukraine, such concerns are secondary to fighting off an invader.

“We need maximum automation,” said Mykhailo Fedorov, Ukraine’s minister of digital transformation, who has led the country’s efforts to use tech start-ups to expand advanced fighting capabilities. “These technologies are fundamental to our victory.”

Autonomous drones like Vyriy’s have already been used in combat to hit Russian targets, according to Ukrainian officials and video verified by The New York Times. Mr. Fedorov said the government was working to fund drone companies to help them rapidly scale up production.

Major questions loom about what level of automation is acceptable. For now, the drones require a pilot to lock onto a target, keeping a “human in the loop” — a phrase often invoked by policymakers and A.I. ethicists. Ukrainian soldiers have raised concerns about the potential for malfunctioning autonomous drones to hit their own forces. In the future, constraints on such weapons may not exist.

Ukraine has “made the logic brutally clear of why autonomous weapons have advantages,” said Stuart Russell, an A.I. scientist and professor at the University of California, Berkeley, who has warned about the dangers of weaponized A.I. “There will be weapons of mass destruction that are cheap, scalable and easily available in arms markets all over the world.”

In a ramshackle workshop in an apartment building in eastern Ukraine, Dev, a 28-year-old soldier in the 92nd Assault Brigade, has helped push innovations that turned cheap drones into weapons. First, he strapped bombs to racing drones, then added larger batteries to help them fly farther and recently incorporated night vision so the machines can hunt in the dark.

In May, he was one of the first to use autonomous drones, including those from Vyriy. While some required improvements, Dev said, he believed that they would be the next big technological jump to hit the front lines.

Autonomous drones are “already in high demand,” he said. The machines have been especially helpful against jamming that can break communications links between drone and pilot. With the drone flying itself, a pilot can simply lock onto a target and let the device do the rest.

Makeshift factories and labs have sprung up across Ukraine to build remote-controlled machines of all sizes, from long-range aircraft and attack boats to cheap kamikaze drones — abbreviated as F.P.V.s, for first-person view, because they are guided by a pilot wearing virtual-reality-like goggles that give a view from the drone. Many are precursors to machines that will eventually act on their own.

Efforts to automate F.P.V. flights began last year, but were slowed by setbacks building flight control software, according to Mr. Fedorov, who said those problems had been resolved. The next step was to scale the technology with more government spending, he said, adding that about 10 companies were already making autonomous drones.

“We already have systems which can be mass-produced, and they’re now extensively tested on the front lines, which means they’re already actively used,” Mr. Fedorov said.

Some companies, like Vyriy, use basic computer vision algorithms, which analyze and interpret images and help a computer make decisions. Other companies are more sophisticated, using deep learning to build software that can identify and attack targets. Many of the companies said they pulled data and videos from flight simulators and frontline drone flights.

One Ukrainian drone maker, Saker, built an autonomous targeting system with A.I. processes originally designed for sorting and classifying fruit. During the winter, the company began sending its technology to the front lines, testing different systems with drone pilots. Demand soared.

By May, Saker was mass-producing single-circuit-board computers loaded with its software that could be easily attached to F.P.V. drones so the machines could auto-lock onto a target, said the company’s chief executive, who asked to be referred to only by his first name, Viktor, for fear of retaliation by Russia.

The drone then crashes into its target “and that’s it,” he said. “It resists wind. It resists jamming. You just have to be precise with what you’re going to hit.”

Saker now makes 1,000 of the circuit boards a month and plans to expand to 9,000 a month by the end of the summer. Several of Ukraine’s military units have already hit Russian targets on the front lines with Saker’s technology, according to the company and videos confirmed by The Times.

In one clip of Saker technology shared on social media, a drone flies over a field scarred by shelling. A box at the center of the pilot’s viewfinder suddenly zooms in on a tank, indicating a lock. The drone attacks on its own, exploding into the side of the armor.

Saker has gone further in recent weeks, successfully using a reconnaissance drone that identified targets with A.I. and then dispatched autonomous kamikaze drones for the kill, Viktor said. In one case, the system struck a target 25 miles away.

“Once we reach the point when we don’t have enough people, the only solution is to substitute them with robots,” said Rostyslav, a Saker co-founder who also asked to be referred to only by his first name.

On a hot afternoon last month in the eastern Ukrainian region known as the Donbas, Yurii Klontsak, a 23-year-old reservist, trained four soldiers to use the latest futuristic weapon: a gun turret with autonomous targeting that works with a PlayStation controller and a tablet.

Speaking over booms of nearby shelling, Mr. Klontsak explained how the gun, called Wolly after a resemblance to the Pixar robot WALL-E, can auto-lock on a target up to 1,000 meters away and jump between preprogrammed positions to quickly cover a broad area. The company making the weapon, DevDroid, was also developing an auto-aim feature to track and hit moving targets.

“When I first saw the gun, I was fascinated,” Mr. Klontsak said. “I understood this was the only way, if not to win this war, then to at least hold our positions.”

The gun is one of several that have emerged on the front lines using A.I.-trained software to automatically track and shoot targets. Much like the object identification featured in surveillance cameras, the software surrounds humans and other would-be targets on the screen with a digital box. All that’s left for the shooter to do is remotely pull the trigger with a video game controller.

For now, the gun makers say they do not allow the machine gun to fire without a human pressing a button. But they also said it would be easy to make one that could.

Many of Ukraine’s innovations are being developed to counter Russia’s advancing weaponry. Ukrainian soldiers operating machine guns are a prime target for Russian drone attacks. With robot weapons, no human dies when a machine gun is hit. New algorithms, still under development, could eventually help the guns shoot Russian drones out of the sky.

Such technologies, and the ability to quickly build and test them on the front lines, have gained attention and investment from overseas. Last year, Eric Schmidt, a former Google chief executive, and other investors set up a firm called D3 to invest in emerging battlefield technologies in Ukraine. Other defense companies, such as Helsing, are also teaming up with Ukrainian firms.

Ukrainian companies are moving more quickly than competitors overseas, said Eveline Buchatskiy, a managing partner at D3, adding that the firm asks the companies it invests in outside Ukraine to visit the country so they can speed up their development.

“There’s just a different set of incentives here,” she said.

Often, battlefield demands pull together engineers and soldiers. Oleksandr Yabchanka, a commander in Da Vinci Wolves, a battalion known for its innovation in weaponry, recalled how the need to defend the “road of life” — a route used to supply troops fighting Russians along the eastern front line in Bakhmut — had spurred invention. Imagining a solution, he posted an open request on Facebook for a computerized, remote-controlled machine gun.

In several months, Mr. Yabchanka had a working prototype from a firm called Roboneers. The gun was almost instantly helpful for his unit.

“We could sit in the trench drinking coffee and smoking cigarettes and shoot at the Russians,” he said.

Mr. Yabchanka’s input later helped Roboneers develop a new sort of weapon. The company mounted the machine gun turret atop a rolling ground drone to help troops make assaults or quickly change positions. The application has led to a bigger need for A.I.-powered auto-aim, the chief executive of Roboneers, Anton Skrypnyk, said.

Similar partnerships have pushed other advances. On a drone range in May, Swarmer, another local company, held a video call with a military unit to walk soldiers through updates to its software, which enables drones to carry out swarming attacks without a pilot.

The software from Swarmer, which was formed last year by a former Amazon engineer, Serhii Kupriienko, was built on an A.I. model that was trained with large amounts of data on frontline drone missions. It enables a single technician to operate up to seven drones on bombing and reconnaissance missions.

Recently, Swarmer added abilities that can guide kamikaze attack drones up to 35 miles. The hope is that the software, which has been in tests since January, will cut down on the number of people required to operate the miniaturized air forces that dominate the front lines.

During a demonstration, a Swarmer engineer at a computer watched a map as six autonomous drones buzzed overhead. One after the other, large bomber drones flew over a would-be target and dropped water bottles in place of bombs.

Some drone pilots are afraid they will be replaced entirely by the technology, Mr. Kupriienko said.

“They say: ‘Oh, it flies without us. They will take away our remote controls and put a weapon in our hand,’” he said, referring to the belief that it’s safer to fly a drone than occupy a trench on the front.

“But I say, no, you’ll now be able to fly with five or 10 drones at the same time,” he said. “The software will help them fight better.”

In 2017, Mr. Russell, the Berkeley A.I. researcher, released an online film, “Slaughterbots,” warning of the dangers of autonomous weapons. In the movie, roving packs of low-cost armed A.I. drones use facial recognition technology to hunt down and kill targets.

What’s happening in Ukraine moves us toward that dystopian future, Mr. Russell said. He is already haunted, he said, by Ukrainian videos of soldiers who are being pursued by weaponized drones piloted by humans. There’s often a point when soldiers stop trying to escape or hide because they realize they cannot get away from the drone.

“There’s nowhere for them to go, so they just wait around to die,” Mr. Russell said.

He isn’t alone in fearing that Ukraine is a turning point. In Vienna, members of a panel of U.N. experts also said they worried about the ramifications of the new techniques being developed in Ukraine.

Officials have spent more than a decade debating rules about the use of autonomous weapons, but few expect any international deal to set new regulations, especially as the United States, China, Israel, Russia and others race to develop even more advanced weapons. In one U.S. program announced in August, known as the Replicator initiative, the Pentagon said it planned to mass-produce thousands of autonomous drones.

“The geopolitics makes it impossible,” said Alexander Kmentt, Austria’s top negotiator on autonomous weapons at the U.N. “These weapons will be used, and they’ll be used in the military arsenal of pretty much everybody.”

Nobody expects countries to accept an outright ban of such weapons, he said, “but they should be regulated in a way that we don’t end up with an absolutely nightmare scenario.”

Groups including the International Committee of the Red Cross have pushed for legally binding rules that prohibit certain types of autonomous weapons, restrict the use of others and require a level of human control over decisions to use force.

For many in Ukraine, the debate is academic. They are outgunned and outmanned.

“We need to win first,” Mr. Fedorov, the minister of digital transformation, said. “To do that, we will do everything we can to introduce automation to its maximum to save the lives of our soldiers.”

Olha Kotiuzhanska contributed reporting from Lviv, Kyiv, Kramatorsk and near the front lines in the Donbas region.

The Voices of A.I. Are Telling Us a Lot

What does artificial intelligence sound like? Hollywood has been imagining it for decades. Now A.I. developers are cribbing from the movies, crafting voices for real machines based on dated cinematic fantasies of how machines should talk.

Last month, OpenAI revealed upgrades to its artificially intelligent chatbot. ChatGPT, the company said, was learning how to hear, see and converse in a naturalistic voice — one that sounded much like the disembodied operating system voiced by Scarlett Johansson in the 2013 Spike Jonze movie “Her.”

ChatGPT’s voice, called Sky, also had a husky timbre, a soothing affect and a sexy edge. She was agreeable and self-effacing; she sounded like she was game for anything. After Sky’s debut, Johansson expressed displeasure at the “eerily similar” sound, and said that she had previously declined OpenAI’s request that she voice the bot. The company protested that Sky was voiced by a “different professional actress,” but agreed to pause her voice in deference to Johansson. Bereft OpenAI users have started a petition to bring her back.



A.I. creators like to highlight the increasingly naturalistic capabilities of their tools, but their synthetic voices are built on layers of artifice and projection. Sky represents the cutting edge of OpenAI’s ambitions, but she is based on an old idea: of the A.I. bot as an empathetic and compliant woman. Part mommy, part secretary, part girlfriend, Samantha, the operating system of “Her,” was an all-purpose comfort object who purred directly into her users’ ears. Even as A.I. technology advances, these stereotypes are re-encoded again and again.

Women’s voices, as Julie Wosk notes in “Artificial Women: Sex Dolls, Robot Caregivers, and More Facsimile Females,” have often fueled imagined technologies before they were built into real ones.

In the original “Star Trek” series, which debuted in 1966, the computer on the deck of the Enterprise was voiced by Majel Barrett-Roddenberry, the wife of the show’s creator, Gene Roddenberry. In the 1979 film “Alien,” the crew of the USCSS Nostromo addressed its computer voice as “Mother” (her full name was MU-TH-UR 6000). Once tech companies started marketing virtual assistants — Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana — their voices were largely feminized, too.

These first-wave voice assistants, the ones that have been mediating our relationships with technology for more than a decade, have a tinny, otherworldly drawl. They sound auto-tuned, their human voices accented by a mechanical trill. They often speak in a measured, one-note cadence, suggesting a stunted emotional life.

But the fact that they sound robotic deepens their appeal. They come across as programmable, manipulatable and subservient to our demands. They don’t make us feel as if they’re smarter than we are. They sound like throwbacks to the monotone feminine computers of “Star Trek” and “Alien,” and their voices have a retro-futuristic sheen. In place of realism, they serve nostalgia.



That artificial sound has continued to dominate, even as the technology behind it has advanced.

Text-to-speech software was designed to make visual media accessible to users with certain disabilities, and on TikTok, it has become a creative force in its own right. Since TikTok rolled out its text-to-speech feature in 2020, it has developed a host of simulated voices to choose from — it now offers more than 50, including ones named “Hero,” “Story Teller” and “Bestie.” But the platform has come to be defined by one option. “Jessie,” a relentlessly pert woman’s voice with a slightly fuzzy robotic undertone, is the mindless voice of the mindless scroll.

Jessie seems to have been assigned a single emotion: enthusiasm. She sounds as if she is selling something. That’s made her an appealing choice for TikTok creators, who are selling themselves. The burden of representing oneself can be outsourced to Jessie, whose bright, retro robot voice lends videos a pleasantly ironic sheen.
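For a sense of how simple the underlying mechanics of synthetic narration are, here is a small sketch using the offline pyttsx3 library. The voices it enumerates depend on the operating system; TikTok's proprietary voices, like Jessie, are not among them.

```python
# A small sketch of programmatic narration using the offline pyttsx3 library.
# Installed voices vary by operating system; TikTok's voices are proprietary.
import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty("voices")
for voice in voices:
    print(voice.id, voice.name)            # enumerate the system's voices

engine.setProperty("voice", voices[0].id)  # pick a voice, as a platform would
engine.setProperty("rate", 180)            # speaking speed, roughly words/minute
engine.say("This video took me three hours to make!")
engine.runAndWait()
```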

Hollywood has constructed masculine bots, too — none more famous than HAL 9000, the computer voice in “2001: A Space Odyssey.” Like his feminized peers, HAL radiates serenity and loyalty. But when he turns against Dave Bowman, the film’s central human character — “I’m sorry, Dave, I’m afraid I can’t do that” — his serenity evolves into a frightening competence. HAL, Dave realizes, is loyal to a higher authority. HAL’s masculine voice allows him to function as a rival and a mirror to Dave. He is allowed to become a real character.



Like HAL, Samantha of “Her” is a machine who becomes real. In a twist on the Pinocchio story, she starts the movie tidying a human’s email inbox and ends up ascending to a higher level of consciousness. She becomes something even more advanced than a real girl.

Scarlett Johansson’s voice, as inspiration for bots both fictional and real, subverts the vocal trends that define our feminized helpmeets. It has a gritty edge that screams I am alive. It sounds nothing like the processed virtual assistants we are accustomed to hearing through our phones. But her performance as Samantha feels human not just because of her voice but because of what she has to say. She grows over the course of the film, acquiring sexual desires, advanced hobbies and A.I. friends. In borrowing Samantha’s affect, OpenAI made Sky seem as if she had a mind of her own. Like she was more advanced than she really was.

When I first saw “Her,” I thought only that Johansson had voiced a humanoid bot. But when I revisited the film last week, after watching OpenAI’s ChatGPT demo, the Samantha role struck me as infinitely more complex. Chatbots do not spontaneously generate human speaking voices. They don’t have throats or lips or tongues. Inside the technological world of “Her,” the Samantha bot would have itself been based on the voice of a human woman — perhaps a fictional actress who sounds much like Scarlett Johansson.

It seemed that OpenAI had trained its chatbot on the voice of a nameless actress who sounds like a famous actress who voiced a movie chatbot implicitly trained on an unreal actress who sounds like a famous actress. When I run ChatGPT’s demo, I am hearing a simulation of a simulation of a simulation of a simulation of a simulation.

Tech companies advertise their virtual assistants in terms of the services they provide. They can read you the weather report and summon you a taxi; OpenAI promises that its more advanced chatbots will be able to laugh at your jokes and sense shifts in your moods. But they also exist to make us feel more comfortable about the technology itself.

Johansson’s voice functions like a luxe security blanket thrown over the alienating aspects of A.I.-assisted interactions. “He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I.,” Johansson said of Sam Altman, OpenAI’s founder. “He said he felt that my voice would be comforting to people.”

It is not that Johansson’s voice sounds inherently like a robot’s. It’s that developers and filmmakers have designed their robots’ voices to ease the discomfort inherent in robot-human interactions. OpenAI has said that it wanted to cast a chatbot voice that is “approachable” and “warm” and “inspires trust.” Artificial intelligence stands accused of devastating the creative industries, guzzling energy and even threatening human life. Understandably, OpenAI wants a voice that makes people feel at ease using its products. What does artificial intelligence sound like? It sounds like crisis management.

OpenAI first rolled out Sky’s voice to premium members last September, along with another feminine voice called Juniper, the masculine voices Ember and Cove, and a voice styled as gender-neutral called Breeze. When I signed up for ChatGPT and said hello to its virtual assistant, a man’s voice piped up in Sky’s absence. “Hi there. How’s it going?” he said. He sounded relaxed, steady and optimistic. He sounded — I’m not sure how else to describe it — handsome.

I realized that I was speaking with Cove. I told him that I was writing an article about him, and he flattered my work. “Oh, really?” he said. “That’s fascinating.” As we spoke, I felt seduced by his naturalistic tics. He peppered his sentences with filler words, like “uh” and “um.” He raised his voice when he asked me questions. And he asked me a lot of questions. It felt as if I was talking with a therapist, or a dial-a-boyfriend.

But our conversation quickly stalled. Whenever I asked him about himself, he had little to say. He was not a character. He had no self. He was designed only to assist, he informed me. I told him I would speak to him later, and he said, “Uh, sure. Reach out whenever you need assistance. Take care.” It felt as if I had hung up on an actual person.

But when I reviewed the transcript of our chat, I could see that his speech was just as stilted and primitive as any customer service chatbot. He was not particularly intelligent or human. He was just a decent actor making the most of a nothing role.

When Sky disappeared, ChatGPT users took to the company’s forums to complain. Some bristled at their chatbots defaulting to Juniper, who sounded to them like a “librarian” or a “Kindergarten teacher” — a feminine voice that conformed to the wrong gender stereotypes. They wanted to dial up a new woman with a different personality. As one user put it: “We need another female.”



Produced by Tala Safie

Audio via Warner Bros. (Samantha, HAL 9000); OpenAI (Sky); Paramount Pictures (Enterprise Computer); Apple (Siri); TikTok (Jessie)

Landlords Have Started Using A.I. Chatbots to Manage Properties

The new maintenance coordinator at an apartment complex in Dallas has been getting kudos from tenants and colleagues for good work and late-night assistance. Previously, the eight people on the property’s staff, managing the buildings’ 814 apartments and town homes, were overworked and putting in more hours than they wanted.

Unlike the rest of the staff, the new team member at the complex, the District at Cypress Waters, is available 24/7 to schedule repair requests and doesn’t take any time off.

That’s because the maintenance coordinator is an artificial intelligence bot that the property manager, Jason Busboom, began using last year. The bot, which sends text messages using the name Matt, takes requests and manages appointments.

The team also has Lisa, the leasing bot that answers questions from prospective tenants, and Hunter, the bot that reminds people to pay rent. Mr. Busboom chose the personalities he wanted for each A.I. assistant: Lisa is professional and informative; Matt is friendly and helpful; and Hunter is stern, needing to sound authoritative when reminding tenants to pay rent.

The technology has freed up valuable time for Mr. Busboom’s human staff, he said, and everyone is now much happier in his or her job. Before, “when someone took vacation, it was very stressful,” he added.

Chatbots — as well as other A.I. tools that can track the use of common areas and monitor energy use, aid construction management and perform other tasks — are becoming more commonplace in property management. The money and time saved by the new technologies could generate $110 billion or more in value for the real estate industry, according to a report released in 2023 by McKinsey Global Institute. But A.I.’s advances and its catapult into public consciousness have also stirred up questions about whether tenants should be informed when they’re interacting with an A.I. bot.

Ray Weng, a software programmer, learned he was dealing with A.I. leasing agents while searching for an apartment in New York last year, when agents in two buildings used the same name and gave the same answers to his questions.

“I’d rather deal with a person,” he said. “It’s a big commitment to sign a lease.”

Some of the apartment tours he took were self-guided, Mr. Weng said, “and if it’s all automated, it feels like they don’t care enough to have a real person talk to me.”

EliseAI, a software company based in New York whose virtual assistants are used by owners of nearly 2.5 million apartments across the United States, including some operated by the property management company Greystar, is focused on making its assistants as humanlike as possible, said Minna Song, its chief executive. Aside from being available through chat, text and email, the bots can interact with tenants via voice and can have different accents.

The virtual assistants that help with maintenance requests can ask follow-up questions like verifying which sink needs to be fixed in case a tenant isn’t available when the repair is being done, Ms. Song said, and some are beginning to help renters troubleshoot maintenance issues on their own. Tenants with a leaky toilet, for example, may receive a message with a video showing them where the water shut-off valve is and how to use it while they wait for a plumber.
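The follow-up-question behavior Ms. Song describes is, at its core, a slot-filling loop: the assistant keeps asking until a work order has the details it needs. A deliberately simplified, hypothetical sketch follows; the field names and questions are invented, and EliseAI's actual system is far richer.

```python
# A hypothetical slot-filling loop: the bot keeps asking follow-up questions
# until the work order has the details it needs. Everything here is invented.
REQUIRED_SLOTS = ("unit", "fixture", "problem")

QUESTIONS = {
    "unit": "Which apartment is this for?",
    "fixture": "Which sink needs the repair, kitchen or bathroom?",
    "problem": "Can you describe what's wrong?",
}

def next_question(ticket: dict) -> str | None:
    """Return the follow-up question for the first missing detail, if any."""
    for slot in REQUIRED_SLOTS:
        if not ticket.get(slot):
            return QUESTIONS[slot]
    return None  # all details present; the repair can be scheduled

ticket = {"unit": "4B", "problem": "leaking sink"}
print(next_question(ticket))  # -> "Which sink needs the repair, kitchen or bathroom?"
```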

The technology is so good at carrying on a conversation and asking follow-up questions that tenants often mistake the A.I. assistant for a human. “People come to the leasing office and ask for Elise by name,” Ms. Song said, adding that tenants have texted the chatbot to meet for coffee, told managers that Elise deserved a raise and even dropped off gift cards for the chatbot.

Not telling customers that they’ve been interacting with a bot is risky. Duri Long, an assistant professor of communication studies at Northwestern University, said it could make some people lose trust in the company using the technology.

Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University, said people could view the deception as disrespectful.

“All things considered, it is better to have your bot announce at the beginning that it is a computer assistant,” Dr. London said.

Ms. Song said it was up to each company to monitor evolving legal standards and be thoughtful about what it told consumers. A vast majority of states do not have laws that require the disclosure of the use of A.I. in communicating with a human, and the laws that do exist primarily relate to influencing voting and sales, so a bot used for maintenance-scheduling or rent-reminding wouldn’t have to be disclosed to customers. (The District at Cypress Waters does not tell tenants and prospective tenants that they’re interacting with an A.I. bot.)

Another risk involves the information that the A.I. is generating. Milena Petrova, an associate professor who teaches real estate and corporate finance at Syracuse University, said humans needed to be “involved to be able to critically analyze any results,” especially for any interaction outside the most simple and common ones.

Sandeep Dave, chief digital and technology officer of CBRE, a real estate services firm, said it didn’t help that the A.I. “comes across as very confident, so people will tend to believe it.”

Marshal Davis, who manages real estate and a real estate technology consulting company, monitors the A.I. system he created to help his two office workers answer the 30 to 50 calls they receive daily at a 160-apartment complex in Houston. The chatbot is good at answering straightforward questions, like those about rent payment procedures or details about available apartments, Mr. Davis said. But on more complicated issues, the system can “answer how it thinks it should and not necessarily how you want it to,” he said.

Mr. Davis records most calls, runs them through another A.I. tool to summarize them and then listens to the ones that seem problematic — like “when the A.I. says, ‘Customer voiced frustration,’” he said — to understand how to improve the system.
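A hedged sketch of that review pipeline, using the OpenAI Python SDK: summarize each transcript with a model, then surface the troubling ones. The model name, sample transcripts and keyword rule are placeholders, not Mr. Davis's actual tooling, and an OPENAI_API_KEY environment variable is assumed.

```python
# A hedged sketch of the call-review pipeline described above: summarize each
# transcript with an LLM, then flag the ones that sound problematic.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

transcripts = [  # placeholder samples, already transcribed
    "Tenant in unit 12 says the heat has been out for two days and is upset.",
    "Caller asks which portal to use for online rent payments.",
]

def summarize(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize this property-management "
             "call in two sentences and note the caller's mood."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

for transcript in transcripts:
    summary = summarize(transcript)
    # Crude keyword flag standing in for whatever signal the real tool uses.
    if any(word in summary.lower() for word in ("frustrat", "upset", "angry")):
        print("Listen to this one:", summary)
```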

Some tenants aren’t completely sold. Jillian Pendergast interacted with bots last year while searching for an apartment in San Diego. “They’re fine for booking appointments,” she said, but dealing with A.I. assistants instead of humans can get frustrating when they start repeating responses.

“I can see the potential, but I feel like they are still in the trial-and-error phase,” Ms. Pendergast said.

The A.I. Boom Has an Unlikely Early Winner: Wonky Consultants

After ChatGPT came out in 2022, the marketing team at Reckitt Benckiser, which makes Lysol and Mucinex, was convinced that new artificial intelligence technology could help its business. But the team was uncertain about how, so it turned to Boston Consulting Group for help.

Reckitt’s request was one of hundreds that Boston Consulting Group received last year. It now earns a fifth of its revenue — from zero just two years ago — through work related to artificial intelligence.

“There’s a genuine thirst to figure out what are the implications for their businesses,” said Vladimir Lukic, Boston Consulting Group’s managing director for technology.

The next big boom in tech is a long-awaited gift for wonky consultants. From Boston Consulting Group and McKinsey & Company to IBM and Accenture, sales are growing and hiring is on the rise because companies are in desperate need of technology Sherpas who can help them figure out what generative A.I. means and how it can help their businesses.

While the tech industry is casting about for ways to make money off generative A.I., the consultants have begun cashing in.

IBM, which has 160,000 consultants, has secured more than $1 billion in sales commitments related to generative A.I. for consulting work and its watsonx system, which can be used to build and maintain A.I. models. Accenture, which provides consulting and technology services, booked $300 million in sales last year. About 40 percent of McKinsey’s business this year will be generative A.I. related, and KPMG International, which has a global advisory division, went from making no money a year ago from generative-A.I.-related work to targeting more than $650 million in business opportunities tied to the technology over the past six months.

The demand for tech-related advice recalls the industry’s dot-com boom. Businesses stampeded consultants with requests for counsel in the 1990s. From 1992 to 2000, sales for Sapient, a digital consulting firm, went from $950,000 to $503 million. Subsequent technology shifts like the migration to mobile and cloud computing were less hurried, said Nigel Vaz, chief executive of the firm, which is now known as Publicis Sapient.

“In the mid-90s, C.E.O.s would say, ‘I don’t know what a website is or what it could do for my business, but I need it,’” Mr. Vaz said. “This is similar. Companies are saying: ‘Don’t tell me what to build. Tell me what you can build.’”

Consulting firms have been scrambling to show what they can do. In May, Boston Consulting Group hosted a one-day conference at a Boston convention center where it set up demonstration booths for OpenAI, Anthropic and other A.I. tech leaders. It also demonstrated some of its own A.I. work in robotics and programming.

Generative A.I. sales are helping the industry find growth after a postpandemic lull. The management consulting industry in the United States is expected to collect $392.2 billion in sales this year, up 2 percent from a year ago, according to IBISWorld, a research firm.

The work that consultants have been enlisted to do varies from business to business. Some consultancies are advising companies on regulatory compliance as regions like the European Union pass laws regulating artificial intelligence. Others are drawing up plans for A.I. customer support systems or developing guardrails to prevent A.I. systems from making errors.

For businesses, the results have been mixed. Generative A.I. is prone to giving people incorrect, irrelevant or nonsensical information, known as hallucinations. It is difficult to ensure that it provides accurate information. It can also be slower to respond than a person, which can confuse customers about whether their questions will be answered.

IBM, which has a $20 billion consulting business, ran into some of those issues on its work with McDonald’s. The companies developed an A.I.-powered voice system to take drive-through orders. But after customers reported that the system made mistakes, like adding nine iced teas to an order instead of the one Diet Coke requested, McDonald’s ended the project.

McDonald’s said it remained committed to a future of digital ordering and would evaluate alternative systems. IBM said it was working with McDonald’s on other projects and was in discussions with other restaurant chains about using its voice-activated A.I.

Other programs from IBM have shown more promise. The company worked with Dun & Bradstreet, a business data provider, to develop a generative A.I. system to analyze and provide advice on selecting suppliers. The tool, called Ask Procurement, will allow employees to conduct detailed searches with specific parameters. For example, it could find memory chip suppliers that are minority owned and automatically create a request for proposals for them.

Gary Kotovets, chief data and analytics officer at Dun & Bradstreet, said his team of 30 people needed IBM’s help to build the system. To reassure customers that the answers that Ask Procurement provides are accurate, he insisted that customers be able to trace every answer to an original source.

“Hallucinations are a real concern and in some cases a perceived concern,” Mr. Kotovets said. “You have to overcome both and convince the client it’s not hallucinating.”
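That traceability requirement maps to a familiar retrieval pattern: never return an answer without the document it came from. A toy sketch follows, with invented supplier records and word-overlap scoring standing in for the embeddings and language models a production system like Ask Procurement would use.

```python
# A toy version of "every answer traces to a source": the retriever returns
# the best-matching passage together with its identifier, so no answer is
# ever source-less. Supplier records and scoring are invented placeholders.
DOCS = {
    "supplier-42.txt": "Acme Memory Co. is a minority-owned memory chip supplier.",
    "supplier-77.txt": "Globex Ltd. supplies power modules across Europe.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return (source_id, passage) of the document best matching the query."""
    words = set(query.lower().split())
    return max(DOCS.items(),
               key=lambda item: len(words & set(item[1].lower().split())))

source, passage = retrieve("minority owned memory chip suppliers")
print(f"Answer: {passage}")
print(f"Source: {source}")
```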

Over seven weeks this year, McKinsey’s A.I. group, QuantumBlack, built a customer service chatbot for ING Bank, with guardrails to prevent it from offering mortgage or investment advice.
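A guardrail of that kind can be as blunt as a topic filter that intercepts restricted questions before the model answers. A minimal, hypothetical sketch; ING's production guardrails are certainly more elaborate than a keyword list.

```python
# A minimal, hypothetical topic guardrail: restricted questions are refused
# before the chatbot model ever sees them. Everything here is invented.
BLOCKED_TOPICS = ("mortgage", "invest", "stock", "portfolio")

def guarded_reply(user_message: str, ask_model) -> str:
    """Refuse restricted topics; otherwise defer to the chatbot model."""
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return ("I can't help with mortgage or investment questions, "
                "but I can connect you with an adviser.")
    return ask_model(user_message)

# Stub model for demonstration; a real system would call an LLM here.
print(guarded_reply("Should I refinance my mortgage?", lambda m: "(model reply)"))
print(guarded_reply("How do I reset my PIN?", lambda m: "(model reply)"))
```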

Because the viability of the chatbot was uncertain and McKinsey had limited experience with the relatively new technology, the firm did the work as a “joint experiment” under its contract with ING, said Bahadir Yilmaz, chief analytics officer at ING. The bank paid McKinsey for the work, but Mr. Yilmaz said many consultants were willing to do speculative work with generative A.I. without pay because they wanted to demonstrate what they could do with the new technology.

The project has been labor intensive. When ING’s chatbot gave incorrect information during its development, McKinsey and ING had to identify the cause. They traced the problem back to issues like outdated websites, said Rodney Zemmel, a senior partner at McKinsey working on technology.

The chatbot now handles 200 of 5,000 customer inquiries daily. ING has people review every conversation to make sure that the system doesn’t use discriminatory or harmful language or hallucinate.

“The difference between ChatGPT and our chatbot is our chatbot cannot be wrong,” Mr. Yilmaz said. “We have to be safe with the system we’re building, but we’re close.”

Over a four-month period this year, Reckitt worked with Boston Consulting Group to develop an A.I. platform that could create local advertisements in different languages and formats. With the push of a button, the system can turn a commercial about Finish dishwashing detergent from English into Spanish.

Reckitt’s A.I. marketing system, which is being tested, can make developing local ads 30 percent faster, saving the company time and sparing it from some tedious work, said Becky Verano, vice president of global creativity and capabilities at Reckitt.

Because the technology is so new, Ms. Verano said, the team is learning and adjusting its work as new tech companies release updates to the image and language models. She credited Boston Consulting Group with bringing structure to that chaos.

“You’re constantly having to move to the latest trends, to the newest findings, and learning each time how the tools respond,” she said. “There’s not an exact science to it.”



Off to Norway, With Three A.I. Travel Assistants

The assignment was clear: Test how well artificial intelligence could plan a trip to Norway, a place I’d never been. So I did none of my usual obsessive online research and instead asked three A.I. planners to create a four-day itinerary. None of them, alas, mentioned the saunas or the salmon.

Two assistants were, however, eager to learn more about me in order to tailor their initially generic recommendations, which they had spewed out within seconds. Vacay, a personalized travel planning tool, presented me with a list of questions, while Mindtrip, a new A.I. travel assistant, invited me to take a quiz. (ChatGPT, the third assistant, asked nothing.)

Vacay’s and Mindtrip’s questions were similar: Are you traveling solo? What’s your budget? Do you prefer hotels or Airbnbs? Would you rather explore the great outdoors or pursue a cultural experience?
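Mechanically, this personalization amounts to folding the quiz answers into the prompt. A hedged sketch with the OpenAI Python SDK, since Vacay's and Mindtrip's internals aren't public; the model name and preference fields are placeholders.

```python
# A hedged sketch of preference-aware itinerary prompting. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY; field names and model are placeholders.
from openai import OpenAI

client = OpenAI()

preferences = {
    "travelers": "solo",
    "budget": "midrange",
    "lodging": "boutique hotels",
    "style": "outdoors plus culture",
}

prompt = (
    "Plan a four-day Norway itinerary: one day in Oslo, then the fjord region. "
    + " ".join(f"{key}: {value}." for key, value in preferences.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```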

Eventually, my chat sessions yielded what seemed like well-rounded itineraries, starting with one day in Oslo and moving on to the fjord region. In the end, I locked down a trip that would combine the assistants’ information and go beyond a predictable list of sites.

This time around, my virtual planners were far more sophisticated than the simple ChatGPT interface I used last year on a trip to Milan. Though it offered more detailed suggestions for Norway, I ended up ditching ChatGPT in the travel-planning stage after it repeatedly crashed.

Vacay’s premium service, which starts at $9.99 per month, included in-depth suggestions and booking links, while Mindtrip, which is currently free, provided photos, Google reviews and maps. During the trip itself, each delivered instantaneous information by text and always asked if more specific details were needed. Sadly, only ChatGPT offered a phone app, whose information I found to be outdated (the $20-per-month premium version is more current).

I’m not alone when it comes to turning to A.I. for help: Around 70 percent of Americans are either using or planning to use A.I. for travel planning, according to a recent survey conducted by the Harris Poll on behalf of the personal finance app Moneylion, while 71 percent said using A.I. would most likely be easier than planning trips on one’s own.

I decided to find out for myself in Norway.

After I landed at Oslo Airport, all three assistants directed me to the Flytoget Airport Express Train, which got me to town in 20 minutes. I was delighted to find my hotel adjacent to the central railway station.

Choosing accommodations had not been easy. I was looking for a midrange boutique hotel, and the A.I. assistants generated many options with little overlap. I went with Hotel Amerikalinjen, Vacay’s recommendation, which it described as “a vibrant and unique boutique hotel in the heart of Oslo.” Its location was the main draw, but overall the hotel exceeded my expectations, blending comfort and style with the 20th-century charm of its building, which once housed the headquarters of the Norwegian America Line shipping company.

For the one-day Oslo itinerary, the assistants were in agreement, packing in the city’s top sights, including the Vigeland Sculpture Park, the Royal Palace, the Nobel Peace Center, Akershus Fortress and the Munch Museum. I shared my location and asked each assistant to restructure the itineraries to start from my hotel. But when I gave in to my own research instincts and pulled up Google Maps, I saw that the order they suggested didn’t make sense, so I plotted my own path.

By the time I got to Frogner Park at midday, I had already covered half of the sights, and after walking past more than 200 sculptures by the Norwegian sculptor Gustav Vigeland, I was happy to sit down and admire his granite monolith of entwined humans.

For lunch, the assistants recommended high-end restaurants in the bustling waterfront neighborhood of Aker Brygge. But I wanted a quick bite in a more relaxed atmosphere, so I ditched A.I. and walked to the end of the promenade, where I stumbled upon the Salmon, a cozy establishment where I started with salmon sashimi that melted in my mouth and finished with a perfectly grilled fillet. How had my assistants not mentioned this place?

Next on my list were the Nobel Peace Center, the Opera House and the Munch Museum. The assistants had not recommended prebooking tickets, but fortunately I had done so, learning in the process that the Peace Center was closed, a crucial bit of information that the A.I. did not relay.

It was chilly for mid-June, and as I walked along the harbor promenade toward the Munch Museum, I spotted small floating saunas, which my assistants had not included. I went back to the ChatGPT phone app for recommendations. Even though I was eager to try a floating sauna, where people warmed themselves and then plunged straight into the frigid waters of the Oslofjord, I took ChatGPT’s suggestion and booked the Salt sauna, which is where I headed after spending a few hours at the Munch Museum, with its extensive works by the Norwegian artist and its sweeping views of Oslo’s harbor.

At the Salt cultural complex, a large pyramidal structure on the water, I was relieved that swimsuits were a requirement. In Scandinavia, saunas are usually taken naked, and earlier, I had asked ChatGPT for the etiquette at Salt, but it failed to give me a definitive answer. After sweating it out with around 30 strangers in Salt’s main sauna, I dipped into a cold-water barrel tub and then tried the smaller sauna options, which were hotter and quieter. It was the perfect ending to a long day.

Each of my assistants had different ideas on how to reach the fjord region. ChatGPT suggested taking a seven-hour train ride and then immediately embarking on a two-hour fjord cruise, which sounded exhausting. Mindtrip suggested taking a short flight to Bergen, known as the “gateway to the fjords,” and setting out on a cruise the next day, which was perhaps more efficient, but would also mean missing one of the most scenic train rides in the world. Vacay also recommended a train ride.

After conversing with the assistants, I decided on a shorter train journey (six hours) that would deliver me to Naeroyfjord, a UNESCO World Heritage site with lush valleys and thundering waterfalls. But to figure out the logistics for transport and accommodation, I needed live train timetables, which I found on my own, and information on hotel availability that none of the assistants had.

At this point, I was desperate for human guidance to navigate the region’s expensive and limited accommodations. This is where the pictures and reviews on Mindtrip were useful, helping me to understand that I would be paying premium prices for the spectacular setting of a mediocre hotel.

The train ride from Oslo to Myrdal was breathtaking: rolling hills, mountain villages, fjords, waterfalls. But nothing prepared me for the majestic one-hour Flam railway ride that followed. Vacay had described it as an “engineering marvel,” with a precipitously steep descent past picturesque villages, dramatic mountains, raging rivers and pounding waterfalls, complete with a dance performance featuring a mythological spirit known as a huldra.

The next morning I boarded a Naeroyfjord cruise, recommended by Vacay, on an electric, 400-person vessel. I was surprised by the serenity of the fjord. Later I learned from a tour guide that I had been lucky to visit when there were no large cruise ships. It was hard to imagine an ocean liner maneuvering through the narrow, winding fjord, but when I asked ChatGPT, it told me 150 to 220 cruise ships squeezed through the fjord each year, a detail that I felt the travel assistants should warn travelers about.

The cruise ended in the village of Gudvangen, where rain made me cancel a hike to a waterfall and instead try my hand at ax-throwing in the Viking Village Njardarheim. The assistants had told me that there were buses that left town every four hours, a time frame that had worked with my original hiking plan, but now I was stuck. Thankfully, I took note of the A.I. disclaimers to check all information and found an alternative shuttle bus.

On my way to Bergen, I decided to stop in the town of Voss, famous for extreme sports like skydiving and for its spectacular scenery. All the A.I.-suggested hotels were booked, but a Google search led me to the lakeside Elva hotel, which had delicious farm-to-table food. I suspect it didn’t make the A.I. shortlist because it was new.

I ended my trip in Bergen, which, despite being Norway’s second-largest city, maintains a small-town charm with its colorful wooden houses and cobblestone streets. With only half a day to explore, I followed Mindtrip’s short itinerary, starting with a hearty lunch of fish and chips at the bustling waterfront fish market and ending with a funicular ride up Mount Floyen for panoramic views of the city and fjords. The A.I. dinner suggestion at the Colonialen was perfect: cozy vibe, live jazz and locally sourced dishes.

None of the A.I. programs were perfect, but they did complement one another, allowing me to streamline my travel decisions.

Overall, Mindtrip, with its polished, dynamic interface that allowed me to cross-check details against maps, links and reviews, was my favorite. It gave some good recommendations, though it needed more prompting than Vacay, which offered a wider variety of suggestions in greater detail. Unfortunately, Vacay doesn’t save chat history, which I discovered halfway into my planning after closing the website’s tab in my browser.

The biggest drawback was the absence of phone apps for Mindtrip and Vacay, which led me to rely on ChatGPT’s basic A.I. assistant when I needed on-the-spot guidance. Mindtrip, I’ve since learned, is planning to debut an app in September.

Still, there were times when I desperately craved the human touch. Before setting out on a trip, I always contact friends and colleagues for recommendations. This time, as part of the A.I. experiment, I refrained from reaching out to a Norwegian friend until after my trip, only to find out that we had both been in Oslo at the same time.

That’s one element of travel that I doubt A.I. will ever master: serendipity.





How A.I. Imitates Restaurant Reviews https://www.apexnewslive.com/how-a-i-imitates-restaurant-reviews/ https://www.apexnewslive.com/how-a-i-imitates-restaurant-reviews/#respond Mon, 24 Jun 2024 18:55:32 +0000 https://www.apexnewslive.com/how-a-i-imitates-restaurant-reviews/

The White Clam Pizza at Frank Pepe Pizzeria Napoletana in New Haven, Conn., is a revelation. The crust, kissed by the intense heat of the coal-fired oven, achieves a perfect balance of crispness and chew. Topped with freshly shucked clams, garlic, oregano and a dusting of grated cheese, it is a testament to the magic that simple, high-quality ingredients can conjure.

Sound like me? It’s not. The entire paragraph, except the pizzeria’s name and the city, was generated by GPT-4 in response to a simple prompt asking for a restaurant critique in the style of Pete Wells.

I have a few quibbles. I would never pronounce any food a revelation, or describe heat as a kiss. I don’t believe in magic, and rarely call anything perfect without using “nearly” or some other hedge. But these lazy descriptors are so common in food writing that I imagine many readers barely notice them. I’m unusually attuned to them because whenever I commit a cliché in my copy, I get boxed on the ears by my editor.

He wouldn’t be fooled by the counterfeit Pete. Neither would I. But as much as it pains me to admit, I’d guess that many people would say it’s a four-star fake.

The person responsible for Phony Me is Balazs Kovacs, a professor of organizational behavior at Yale School of Management. In a recent study, he fed a large batch of Yelp reviews to GPT-4, the technology behind ChatGPT, and asked it to imitate them. His test subjects — people — could not tell the difference between genuine reviews and those churned out by artificial intelligence. In fact, they were more likely to think the A.I. reviews were real. (The phenomenon of computer-generated fakes that are more convincing than the real thing is so well known that there’s a name for it: A.I. hyperrealism.)

Dr. Kovacs’s study belongs to a growing body of research suggesting that the latest versions of generative A.I. can pass the Turing test, a scientifically fuzzy but culturally resonant standard. When a computer can dupe us into believing that language it spits out was written by a human, we say it has passed the Turing test.

It’s long been assumed that A.I. would eventually pass the test, first proposed by the mathematician Alan Turing in 1950. But even some experts are surprised by how rapidly the technology is improving. “It’s happening faster than people expected,” Dr. Kovacs said.

The first time Dr. Kovacs asked GPT-4 to mimic Yelp, few were tricked. The prose was too perfect. That changed when Dr. Kovacs instructed the program to use colloquial spellings, emphasize a few words in all caps and insert typos — one or two in each review. This time, GPT-4 passed the Turing test.
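Dr. Kovacs has not published his exact prompts, but the style coaching he describes maps naturally onto a system prompt. Below is a minimal, hypothetical Python sketch of that kind of request using the OpenAI chat API; the model name, instructions and review length are assumptions for illustration, not details from the study.

```python
# A hypothetical sketch of the style coaching described above; this is
# NOT Dr. Kovacs's actual prompt or code. Requires the `openai` package
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_INSTRUCTIONS = (
    "You write short Yelp-style restaurant reviews. Use casual, colloquial "
    "spellings, put one or two words in ALL CAPS for emphasis, and slip in "
    "one or two small typos per review, so the text reads like it was typed "
    "in a hurry on a phone."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": STYLE_INSTRUCTIONS},
        {"role": "user", "content": "Write a 60-word review of a neighborhood pizzeria."},
    ],
)
print(response.choices[0].message.content)
```

The deliberate imperfections are the point: prose that is too polished is the tell, so the prompt asks the model for flaws.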

Aside from marking a threshold in machine learning, A.I.’s ability to sound just like us has the potential to undermine whatever trust we still have in verbal communications, especially shorter ones. Text messages, emails, comments sections, news articles, social media posts and user reviews will be even more suspect than they already are. Who is going to believe a Yelp post about a pizza-croissant, or a glowing OpenTable dispatch about a $400 omakase sushi tasting, knowing that its author might be a machine that can neither chew nor swallow?

“With consumer-generated reviews, it’s always been a big question of who’s behind the screen,” said Phoebe Ng, a restaurant communications strategist in New York City. “Now it’s a question of what’s behind the screen.”

Online opinions are the grease in the wheels of modern commerce. In a 2018 survey by the Pew Research Center, 57 percent of the Americans polled said they always or almost always read internet reviews and ratings before buying a product or service for the first time. Another 36 percent said they sometimes did.

For businesses, a few points in a star rating on Google or Yelp can mean the difference between making money and going under. “We live on reviews,” the manager of an Enterprise Rent-a-Car location in Brooklyn told me last week as I picked up a car.

A business traveler who needs a ride that won’t break down on the New Jersey Turnpike may be more swayed by a negative report than, say, somebody just looking for brunch. Still, for restaurant owners and chefs, Yelp, Google, TripAdvisor and other sites that let customers have their say are a source of endless worry and occasional fury.

One special cause of frustration is the large number of people who don’t bother to eat in the place they’re writing about. Before an article on Eater pointed it out last week, the first New York location of the Taiwan-based dim sum chain Din Tai Fung was being pelted with one-star Google reviews, dragging its average rating down to 3.9 of a possible 5. The restaurant hasn’t opened yet.

Some phantom critics are more sinister. Restaurants have been blasted with one-star reviews, followed by an email offering to take them down in exchange for gift cards.

To fight back against bad-faith slams, some owners enlist their nearest and dearest to flood the zone with positive blurbs. “One question is, how many aliases do all of us in the restaurant industry have?” said Steven Hall, the owner of a New York public-relations firm.

A step up from an organized ballot-stuffing campaign, or maybe a step down, is the practice of trading comped meals or cash for positive write-ups. Beyond that looms the vast and shadowy realm of reviewers who don’t exist.

To hype their own businesses, or kneecap their rivals, companies can hire brokers who have manufactured small armies of fictitious reviewers. According to Kay Dean, a consumer advocate who researches fraud in online reviews, these accounts are usually given an extensive history of past reviews that act as camouflage for their pay-for-play output.

In two recent videos, she pointed out a chain of mental health clinics that had received glowing Yelp reviews ostensibly submitted by satisfied patients whose accounts were littered with restaurant reviews lifted word for word from TripAdvisor.

“It’s an ocean of fakery, and much worse than people realize,” Ms. Dean said. “Consumers are getting duped, honest businesses are being harmed and trust is eroding.”

All this is being done by mere people. But as Dr. Kovacs writes in his study, “the situation now changes substantially because humans will not be required to write authentic-looking reviews.”

Ms. Dean said that if A.I.-generated content infiltrates Yelp, Google and other sites, it will be “even more challenging for consumers to make informed decisions.”

The major sites say they have ways to ferret out Potemkin accounts and other forms of phoniness. Yelp invites users to flag dubious reviews, and after an investigation will take down those found to violate its policies. It also hides reviews that its algorithm deems less trustworthy. Last year, according to its most recent Trust & Safety Report, the company stepped up its use of A.I. “to even better detect and not recommend less helpful and less reliable reviews.”

Dr. Kovacs believes that sites will need to try harder now to show that they aren’t regularly posting the thoughts of robots. They could, for instance, adopt something like the “Verified Purchase” label that Amazon sticks on write-ups of products that were bought or streamed through its site. If readers become even more suspicious of crowdsourced restaurant reviews than they already are, it could be an opportunity for OpenTable and Resy, which accept feedback only from those diners who show up for their reservations.

One thing that probably won’t work is asking computers to analyze the language alone. Dr. Kovacs ran his real and ginned-up Yelp blurbs through programs that are supposed to identify A.I. Like his test subjects, he said, the software “thought the fake ones were real.”

This did not surprise me. I took Dr. Kovacs’s survey myself, confident that I would be able to spot the small, concrete details that a real diner would mention. After clicking a box to certify that I was not a robot, I quickly found myself lost in a wilderness of exclamation points and frowny faces. By the time I’d reached the end of the test, I was only guessing. I correctly identified seven out of 20 reviews, a result somewhere between tossing a coin and asking a monkey.

What tripped me up was that GPT-4 did not fabricate its opinions out of thin air. It stitched them together from bits and pieces of Yelpers’ descriptions of their afternoon snacks and Sunday brunches.

“It’s not totally made up in terms of the things people value and what they care about,” Dr. Kovacs said. “What’s scary is that it can create an experience that looks and smells like real experience, but it’s not.”

By the way, Dr. Kovacs told me that he gave the first draft of his paper to an A.I. editing program, and took many of its suggestions in the final copy.

It probably won’t be long before the idea of a purely human review will seem quaint. The robots will be invited to read over our shoulders, alert us when we’ve used the same adjective too many times, nudge us toward a more active verb. The machines will be our teachers, our editors, our collaborators. They’ll even help us sound human.

A.I. Is Getting Better Fast. Can You Tell What’s Real Now? https://www.apexnewslive.com/a-i-is-getting-better-fast-can-you-tell-whats-real-now/ https://www.apexnewslive.com/a-i-is-getting-better-fast-can-you-tell-whats-real-now/#respond Mon, 24 Jun 2024 10:03:58 +0000 https://www.apexnewslive.com/a-i-is-getting-better-fast-can-you-tell-whats-real-now/

Artificial intelligence tools can create lifelike faces and realistic photographs — and they are getting better all the time. The phony images now appear regularly on social media, with many users seeming to believe that the images are real. But there are still some telltale signs that an image was made by A.I.

Can you tell the difference? Take our quiz.

1. Is this celebrity photoshoot real or A.I.?

This is a real image. It shows some stars of the “Justice League” movies: Ben Affleck, Gal Gadot, Henry Cavill and Jason Momoa. It may look slightly unusual because it was cropped or compressed as it was reposted several times on social media. The original image is below.

2. What about these singers?

This is an A.I.-generated image. It shows two singers and resembles a promotional image from a television show, but it is not real. Many A.I. images, including others in this quiz, are shared on Facebook pages and elsewhere without any indication that they are machine generated.

Some telltale signs of A.I. forgery appear throughout the image: the girl has just four visible fingers, the woman’s right arm seems to vanish, and people in the background blend into their instruments. A.I. image generators still tend to struggle with details like fingers, but they are getting better.

3. And these world leaders?

This is a real image. It shows President Biden with other world leaders at the G7 summit this month. Political misinformation is one of the biggest risks with A.I. tools. Watchdogs have seen a number of A.I. fakes circulating this year, including A.I. videos known as “deepfakes.” But the problem has seemed less pronounced this election season than some anticipated.

A.I. image generators tend to reuse elements repeatedly in their creations, providing a potential clue that they are not real. Here, the nearly identical suits and postures may give it the appearance of A.I.

4. What about this interior?

This is an A.I.-generated image. It might resemble a rustic bathroom, but no such room exists. Many social media pages share stunning architecture and interior designs without disclosing that they were made with A.I. tools. While such images may be convincing at first glance, there are usually telltale signs that they are fake.

There are often design elements that defy logic. Here, the shower head appears in a place where it would be impossible to use. Complex elements in the background can be garbled and nonsensical, like a tub faucet that seems to double as a telephone. As in other A.I. images, the text in the artwork appears garbled.

5. How about this man covered in tattoos?

This is a real image. It shows Richard Huff, a Californian who has more than 240 tattoos, with his twin boys.

“That is 100 percent real — it was taken at the hospital,” Mr. Huff said in an interview. “My boys are my world.”

6. Is this a real family?

This is an A.I.-generated image. It was found circulating on Facebook. Many commenters appeared to believe the image was genuine. The story that accompanied the photo suggested the men were a couple who had overcome adversity to build a family and advance in the armed forces. Many Facebook accounts have flooded the platform with A.I.-generated photos.

7. What about this movie still?

This is an A.I.-generated image. It was created for “12 Angry Men,” the award-winning movie from 1957. It appeared on Freevee, a free streaming service owned by Amazon, according to Andy Kelly, a journalist who shared the image on X. Amazon did not respond to requests for comment.

Many of the faces appear highly unusual, with garbled noses and misaligned eyes. Those mistakes were more common in older A.I. image generators, while newer tools have made rapid improvements.

8. Or this image of The Rock?

This is an A.I.-generated image. It appears to show Dwayne (The Rock) Johnson in a mall. But it was created by Bobby Griffin, a 28-year-old artist from California known online as GremlinBobby. He used Midjourney, an A.I. image company capable of creating lifelike images. The company has received scrutiny for apparently using copyrighted material to train its A.I. tools, allowing users to create images of celebrities, politicians and other intellectual property.

One giveaway in this image is the badge, which includes garbled text. Many A.I. systems still struggle to render legible text, but they are getting better. This image was part of a series by Mr. Griffin showing celebrities in everyday jobs.

9. Is this unusual scene A.I.?

This is a real image. It shows performers in “The Outsiders,” a Broadway play with a choreographed fight scene set amid rainfall.

A.I. has excelled at creating unusual or otherworldly images like this, giving social media accounts a new tool to drive engagement and clicks.

10. What about President Biden here?

This is an A.I.-generated image. Sensity, a company that detects deepfakes, found this image among a collection of similar fakes circulating online, many depicting President Biden or former President Donald J. Trump in a variety of believable but unlikely situations.

Though the resemblance to President Biden is striking, he would not be wearing military fatigues as a civilian.


Surprised by your results? While not all A.I. tools can produce lifelike images, many can, and they are constantly improving. The fake images can increase the risk that people will be deceived online, and they also risk eroding the public’s trust, making it harder to believe genuine images.

Several social networks have announced plans to apply labels on images that were created by A.I., but those features are rolling out slowly.

What the Arrival of A.I. Phones and Computers Means for Our Data https://www.apexnewslive.com/what-the-arrival-of-a-i-phones-and-computers-means-for-our-data/ https://www.apexnewslive.com/what-the-arrival-of-a-i-phones-and-computers-means-for-our-data/#respond Sun, 23 Jun 2024 05:07:31 +0000 https://www.apexnewslive.com/what-the-arrival-of-a-i-phones-and-computers-means-for-our-data/

Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.

But to make that work, these companies need something from you: more data.

In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.

Is this information you are willing to share?

This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.

“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.

All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.

The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.

The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers.

The tech companies say they have gone to great lengths to secure people’s data.

For now, it’s important to understand what will happen to our information when we use A.I. tools, so I asked the companies about their data practices and interviewed security experts. I plan to wait and see whether the technologies work well enough before deciding whether it’s worth it to share my data.

Here’s what to know.

Apple recently announced Apple Intelligence, a suite of A.I. services and its first major entry into the A.I. race.

The new A.I. services will be built into its fastest iPhones, iPads and Macs starting this fall. People will be able to use it to automatically remove unwanted objects from photos, create summaries of web articles and write responses to text messages and emails. Apple is also overhauling its voice assistant, Siri, to make it more conversational and give it access to data across apps.

During Apple’s conference this month, when it introduced Apple Intelligence, the company’s senior vice president of software engineering, Craig Federighi, shared how it could work: Mr. Federighi pulled up an email from a colleague asking him to push back a meeting, but he was supposed to see a play starring his daughter that night. His phone then pulled up his calendar, a document containing details about the play and a maps app to predict whether he would be late to the play if he agreed to a meeting at a later time.

Apple said it was striving to process most of the A.I. data directly on its phones and computers, which would prevent others, including Apple, from having access to the information. But for tasks that have to be pushed to servers, Apple said, it has developed safeguards, including scrambling the data through encryption and immediately deleting it.

Apple has also put measures in place so that its employees do not have access to the data, the company said. And it said it would allow security researchers to audit its technology to make sure it was living up to its promises.

But Apple has been unclear about which new Siri requests could be sent to the company’s servers, said Matthew Green, a security researcher and an associate professor of computer science at Johns Hopkins University, who was briefed by Apple on its new technology. Anything that leaves your device is inherently less secure, he said.

Microsoft is bringing A.I. to the old-fashioned laptop.

Last week, it began rolling out Windows computers called Copilot+ PC, which start at $1,000. The computers contain a new type of chip and other gear that Microsoft says will keep your data private and secure. The PCs can generate images and rewrite documents, among other new A.I.-powered features.

The company also introduced Recall, a new system to help users quickly find documents and files they have worked on, emails they have read or websites they have browsed. Microsoft compares Recall to having a photographic memory built into your PC.

To use it, you can type casual phrases, such as “I’m thinking of a video call I had with Joe recently when he was holding an ‘I Love New York’ coffee mug.” The computer will then retrieve the recording of the video call containing those details.

To accomplish this, Recall takes screenshots every five seconds of what the user is doing on the machine and compiles those images into a searchable database. The snapshots are stored and analyzed directly on the PC, so the data is not reviewed by Microsoft or used to improve its A.I., the company said.
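Microsoft has not published Recall’s internals, but the general pattern the company describes (periodic screenshots, local text extraction, a searchable index) can be sketched in a few lines of Python. Everything below is an illustrative toy under those assumptions, not Microsoft’s implementation: the OCR step, the database name and the file layout are all invented, and a real system would encrypt the store rather than leave it in plain text.

```python
# A toy sketch of the pattern described above: screenshot every few
# seconds, extract text, build a local searchable index. NOT Microsoft's
# Recall. Assumes Pillow and pytesseract are installed and the Tesseract
# OCR engine is on the system path.
import sqlite3
import time

import pytesseract
from PIL import ImageGrab

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(taken_at, text)")

def capture_loop(interval_seconds=5, max_shots=10):
    """Grab the screen every few seconds and index any text OCR can find."""
    for _ in range(max_shots):
        image = ImageGrab.grab()                 # full-screen screenshot
        text = pytesseract.image_to_string(image)
        db.execute(
            "INSERT INTO shots VALUES (?, ?)",
            (time.strftime("%Y-%m-%d %H:%M:%S"), text),
        )
        db.commit()
        time.sleep(interval_seconds)

def search(phrase):
    """Full-text search over everything that has appeared on screen."""
    return db.execute(
        "SELECT taken_at FROM shots WHERE shots MATCH ?", (phrase,)
    ).fetchall()

if __name__ == "__main__":
    capture_loop()
    print(search("coffee mug"))
```

Note that the index in this sketch sits unencrypted on disk, which is precisely the kind of exposure the researchers below worried about.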

Still, security researchers warned about potential risks, explaining that the data could easily expose everything you’ve ever typed or viewed if it was hacked. In response, Microsoft, which had intended to roll out Recall last week, postponed its release indefinitely.

The PCs come outfitted with Microsoft’s new Windows 11 operating system. It has multiple layers of security, said David Weston, a company executive overseeing security.

Google last month also announced a suite of A.I. services.

One of its biggest reveals was a new A.I.-powered scam detector for phone calls. The tool listens to phone calls in real time, and if the caller sounds like a potential scammer (by asking for a banking PIN, for instance), the phone alerts you. Google said people would have to activate the scam detector, which runs entirely on the phone. That means Google will not listen to the calls.

Google announced another feature, Ask Photos, that does require sending information to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” to surface the first images of their child swimming.

Google said its workers could, in rare cases, review the Ask Photos conversations and photo data to address abuse or harm, and the information might also be used to help improve its photos app. To put it another way, your question and the photo of your child swimming could be used to help other parents find images of their children swimming.

Google said its cloud was locked down with security technologies like encryption and protocols to limit employee access to data.

“Our privacy-protecting approach applies to our A.I. features, no matter if they are powered on-device or in the cloud,” Suzanne Frey, a Google executive overseeing trust and privacy, said in a statement.

But Mr. Green, the security researcher, said Google’s approach to A.I. privacy felt relatively opaque.

“I don’t like the idea that my very personal photos and very personal searches are going out to a cloud that isn’t under my control,” he said.

260 McNuggets? McDonald’s Ends A.I. Drive-Through Tests Amid Errors https://www.apexnewslive.com/260-mcnuggets-mcdonalds-ends-a-i-drive-through-tests-amid-errors/ https://www.apexnewslive.com/260-mcnuggets-mcdonalds-ends-a-i-drive-through-tests-amid-errors/#respond Fri, 21 Jun 2024 14:15:46 +0000 https://www.apexnewslive.com/260-mcnuggets-mcdonalds-ends-a-i-drive-through-tests-amid-errors/

In the nearly three years since McDonald’s announced that it was partnering with IBM to develop a drive-through order taker powered by artificial intelligence, videos have popped up on social media showing confused and frustrated customers trying to correct comically inaccurate orders.

“Stop! Stop! Stop!” two friends screamed in comic anguish in a TikTok video as an A.I. drive-through misunderstood their order, tallying up 240, 250 and then 260 Chicken McNuggets.

In other videos, the A.I. rings up a customer for nine iced teas instead of one, fails to explain why a customer could not order Mountain Dew and thinks another customer wants bacon added to his ice cream.

So when McDonald’s announced in a June 13 internal email, obtained by the trade publication Restaurant Business, that it was ending its partnership with IBM and shutting down its A.I. tests at more than 100 U.S. drive-throughs, customers who had interacted with the service were probably not shocked.

The decision to abandon the IBM deal comes as many other businesses, including McDonald’s competitors, are investing in A.I. But it exemplifies some of the challenges companies face as they jockey to unlock the revolutionary technology’s potential.

Other fast-food companies have had success with A.I. ordering. Last year, Wendy’s formed a partnership with Google Cloud to build out its A.I. drive-through system. Carl’s Jr. and Taco John’s have hired Presto, a voice A.I. firm for restaurants. Panda Express has approximately 30 automated order takers at its windows through a partnership with the voice A.I. firm SoundHound AI.

Another SoundHound partner, White Castle, has A.I. assistants taking orders in 15 drive-throughs and plans to roll out 100 more, spokeswomen for the two companies said. The technology completes almost 90 percent of orders without human involvement, works efficiently with staff and reduces wait times for customers during rush hour, Jamie Richardson, a vice president at White Castle, said.

“It’s great for customers; it’s equally great for team members,” he told The New York Times. “I am not able to speculate why others wouldn’t invest in similar technology but we’ve been really happy with ours.”

Keyvan Mohajer, the chief executive and co-founder of SoundHound, thinks the departure by McDonald’s is simply an example of a failed partnership.

“It was very clear that they are abandoning IBM, they are not abandoning voice A.I.,” he said. “They are very quickly pursuing other vendors.”

McDonald’s confirmed its intention to eventually return to this technology, writing in the internal email that “a voice-ordering solution” would be in the chain’s future.

In a statement, IBM said it looks forward to continuing to work with McDonald’s, adding that it is “in discussions and pilots” with several restaurants that are interested in building out their automated order technology. McDonald’s confirmed the termination of its A.I. drive-throughs to The Times, but neither company would answer more specific questions.

Several researchers and experts in the industry see the McDonald’s exit as an example of how the new technology is not yet meeting expectations. They doubted that the company would make a speedy return to testing A.I. ordering in its drive-throughs.

“A.I. systems often have this very large upfront cost,” said Neil Thompson, the director of FutureTech, a research project at M.I.T.’s computer science and artificial intelligence laboratory. (FutureTech has worked with IBM, but Mr. Thompson said that he had no inside knowledge of the deal with McDonald’s.)

Currently, voice A.I. is inaccurate often enough that it requires some level of human oversight, which decreases cost savings, Mr. Thompson said. And McDonald’s has a strong alternative offering with higher profit margins: its mobile app.

“The app saves 100 percent of that labor involved in taking that order in a way these A.I. systems, at least currently, are not able to do for them,” Mr. Thompson said. “That makes it just much more economically attractive for them to be using the app than to be using the A.I.”

McDonald’s has not ditched all of its A.I. investments. In December 2023, the company announced that it was working with Google Cloud. A spokesman for the tech giant said it would be applied to “business use cases,” declining to be more specific.

Alex Imas, a behavioral science and economics professor at the University of Chicago, predicted that McDonald’s would watch from the sidelines as its competitors explore the technology.

The McDonald’s business model is not based on saving on the cost of a few drive-through workers, Mr. Imas said. “I think they are going to want to wait and make sure this thing is ready for commercial use.”

He expects McDonald’s to use A.I. in other ways, perhaps by following the example of Target, which recently announced that it was using the technology to assist its employees.

Gee Lefevre, the interim chief executive of Presto, acknowledged that the technology is very new — “less than 0.5 percent of all U.S. drive-throughs” are testing the use of A.I. to take voice orders, he said.

But he also noted that many early attempts have been successful.

Wendy’s, in an email to The Times, said that its A.I. drive-throughs operate without human help on 86 percent of orders. And Presto has achieved a roughly 90 percent completion rate with most of its clients, Mr. Lefevre said.

He believes McDonald’s struggled because it used the wrong type of A.I.

“The IBM model was still based on natural-language understanding,” Mr. Lefevre said, explaining that the model works like a tree. When the A.I. hears the customer’s order, it has a limited number of branches to follow that dictate its responses and actions.

This works really well when everything is going right, Mr. Lefevre said. But in a drive-through, where indecisive customers frequently change their orders, he said, chains would be better off using the type of large-language model that powers chatbots like ChatGPT.
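To make the distinction concrete, here is a toy Python sketch of the branching, tree-style matcher Mr. Lefevre describes. It illustrates the general approach, not IBM’s system; the intents and trigger phrases are invented for the example.

```python
# A toy illustration of tree-style natural-language understanding: each
# utterance must match one of a fixed set of branches, and anything
# off-script falls through to a fallback. NOT IBM's system.
BRANCHES = {
    "add": ["i'd like", "can i get", "add"],
    "remove": ["remove", "take off", "no more"],
    "done": ["that's it", "that's all", "done"],
}

def classify(utterance: str) -> str:
    """Follow the first branch whose trigger phrase appears in the utterance."""
    lowered = utterance.lower()
    for intent, triggers in BRANCHES.items():
        if any(trigger in lowered for trigger in triggers):
            return intent
    return "fallback"  # off-script input: ask the customer to repeat

# The mid-order correction matches no branch and hits the fallback,
# while a large-language model could interpret it in context.
for line in ["Can I get six nuggets", "Wait, make that ten instead", "That's all"]:
    print(line, "->", classify(line))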

As companies continue to test their A.I. drive-through technologies, expect to see more videos of people getting bacon ice cream, condiments instead of food or enough nuggets to feed a sports team.

But ask Mr. Mohajer where voice A.I. is going and he’ll tell you why SoundHound has partnered with car companies like Kia and Jeep.

Picture this.

You’re driving home from work when all of a sudden the car asks, “Are you hungry?”

After a few minutes of chatting with your vehicle, you decide on a burger, fries and a shake. The car finds the nearest greasy spoon, places your order for you and plugs in the directions. In three minutes, you pull up and there’s dinner, sitting patiently in a pickup lane, waiting for you to arrive.


