YouTube Says Computers Are Catching Problem Videos


Figuring out how to remove unwanted videos — and balancing that with free speech — is a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University’s Global Digital Policy Incubator.

“It’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side,” Ms. Donahoe said. “It’s a hard problem to solve.”

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented. But the company said the takedowns represented “a fraction of a percent” of YouTube’s total views during the quarter.

Photo: Google said last year it would hire 10,000 people to address policy violations across its platforms. YouTube said on Monday that it had filled a majority of the jobs that had been allotted to it. Credit: Roger Kisby for The New York Times

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on A.I. tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks, with embarrassing results. Last year, parents complained that violent or provocative videos were finding their way to YouTube Kids, an app that is supposed to contain only child-friendly content that has automatically been filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is far too large for human monitors alone to handle.

Still, in December, Google said it would hire 10,000 people in 2018 to address policy violations across its platforms. In a blog post on Monday, YouTube said it had filled the majority of the jobs allotted to it, hiring specialists with expertise in violent extremism, counterterrorism and human rights, and expanding its regional teams. It was not clear what YouTube’s final share of the total would be.

Nevertheless, YouTube said three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company’s machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearing on the site. And in some cases with videos containing nudity or misleading content, YouTube said its computer systems are adept enough to delete the video without requiring a human to review the decision.
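The mechanics of that matching are not spelled out here, but the general pattern is a fingerprint lookup against a registry of already-removed content. Below is a minimal sketch of the idea, using an exact hash as a stand-in for the robust perceptual fingerprints a production system would need; the function and registry names are illustrative, not YouTube's.

```python
import hashlib

# Sketch: block re-uploads of videos that were already taken down.
# A real system would use perceptual audio/video fingerprints that survive
# re-encoding and cropping; an exact SHA-256 hash is only a stand-in here.

removed_fingerprints = set()  # fingerprints of videos already removed


def fingerprint(video_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint of a video file."""
    return hashlib.sha256(video_bytes).hexdigest()


def register_takedown(video_bytes: bytes) -> None:
    """Record the fingerprint of a video that moderators removed."""
    removed_fingerprints.add(fingerprint(video_bytes))


def allow_upload(video_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches a removed video."""
    return fingerprint(video_bytes) not in removed_fingerprints


if __name__ == "__main__":
    banned = b"...bytes of a removed video..."
    register_takedown(banned)
    print(allow_upload(banned))        # False: the re-upload is blocked
    print(allow_upload(b"new video"))  # True: unseen content is allowed
```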

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced so-called machine-learning technology to help computers identify videos associated with violent extremists, 8 percent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of 2018, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users had raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.
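As a quick sanity check, those figures imply roughly three flags per flagged video and a removal rate of about 16 percent of the videos that users flagged:

```python
# Back-of-the-envelope ratios implied by the user-flagging figures above.
flags = 30_000_000           # user flags raised during the quarter
flagged_videos = 9_300_000   # distinct videos those flags covered
removed = 1_500_000          # videos removed after first being flagged by users

print(flags / flagged_videos)    # about 3.2 flags per flagged video
print(removed / flagged_videos)  # about 0.16, i.e. roughly 16 percent removed
```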


A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit


“There is a mountain of demand and a trickle of supply,” said Chris Nicholson, the chief executive and founder of Skymind, a start-up working on A.I.

That raises significant issues for universities and governments. They also need A.I. expertise, both to teach the next generation of researchers and to put these technologies into practice in everything from the military to drug discovery. But they could never match the salaries being paid in the private sector.

In 2015, Elon Musk, the chief executive of the electric-car maker Tesla, and other well-known figures in the tech industry created OpenAI and moved it into offices just north of Silicon Valley in San Francisco. They recruited several researchers with experience at Google and Facebook, two of the companies leading an industrywide push into artificial intelligence.

In addition to salaries and signing bonuses, the internet giants typically compensate employees with sizable stock options — something that OpenAI does not do. But it has a recruiting message that appeals to idealists: It will share much of its work with the outside world, and it will consciously avoid creating technology that could be a danger to people.

“I turned down offers for multiple times the dollar amount I accepted at OpenAI,” said Ilya Sutskever, a co-founder of the lab who leads its research. “Others did the same.” He said he expected salaries at OpenAI to increase as the organization pursued its “mission of ensuring powerful A.I. benefits all of humanity.”

OpenAI spent about $11 million in its first year, with more than $7 million going to salaries and other employee benefits. It employed 52 people in 2016.

Photo: An old video game used for training an autonomous system at OpenAI, a nonprofit lab in San Francisco. Credit: Christie Hemm Klok for The New York Times

People who work at major tech companies or have entertained job offers from them have told The New York Times that A.I. specialists with little or no industry experience can make between $300,000 and $500,000 a year in salary and stock. Top names can receive compensation packages that extend into the millions.

“The amount of money was borderline crazy,” Wojciech Zaremba, a researcher who joined OpenAI after internships at Google and Facebook, told Wired. While he would not reveal exact numbers, Mr. Zaremba said big tech companies were offering him two or three times what he believed his real market value was.

At DeepMind, a London A.I. lab now owned by Google, costs for 400 employees totaled $138 million in 2016, according to the company’s annual financial filings in Britain. That translates to $345,000 per employee, including researchers and other staff.
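The per-employee figure is simply the reported costs divided by the head count:

```python
# Per-employee cost implied by DeepMind's 2016 filing.
total_costs = 138_000_000  # reported 2016 costs for staff, in dollars
employees = 400
print(total_costs / employees)  # 345000.0
```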

Researchers like Mr. Sutskever specialize in what are called neural networks, complex algorithms that learn tasks by analyzing vast amounts of data. They are used in everything from digital assistants in smartphones to self-driving cars.
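As a concrete, if toy, illustration of that one-sentence description (and not any lab's actual code), the sketch below trains a tiny network to reproduce the XOR pattern by nudging its weights to reduce its error on labeled examples:

```python
import numpy as np

# Toy neural network: learn XOR from four labeled examples by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error with respect to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```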

Some researchers may command higher pay because their names carry weight across the A.I. community and they can help recruit other researchers.

Mr. Sutskever was part of a three-researcher team at the University of Toronto that created breakthrough computer vision technology. Ian Goodfellow invented a technique that allows machines to create fake digital photos that are nearly indistinguishable from the real thing.

“When you hire a star, you are not just hiring a star,” Mr. Nicholson of the start-up Skymind said. “You are hiring everyone they attract. And you are paying for all the publicity they will attract.”

Other researchers at OpenAI, including Greg Brockman, who leads the lab alongside Mr. Sutskever, did not receive such high salaries during the lab’s first year.

In 2016, according to the tax forms, Mr. Brockman, who had served as chief technology officer at the financial technology start-up Stripe, made $175,000. As one of the founders of the organization, however, he most likely took a salary below market value. Two other researchers with more experience in the field — though still very young — made between $275,000 and $300,000 in salary alone in 2016, according to the forms.

Though the pool of available A.I. researchers is growing, it is not growing fast enough. “If anything, demand for that talent is growing faster than the supply of new researchers, because A.I. is moving from early adopters to wider use,” Mr. Nicholson said.

That means it can be hard for companies to hold on to their talent. Last year, after only 11 months at OpenAI, Mr. Goodfellow returned to Google. Pieter Abbeel and two other researchers left the lab to create a robotics start-up, Embodied Intelligence. (Mr. Abbeel has since signed back on as a part-time adviser to OpenAI.) And another researcher, Andrej Karpathy, left to become the head of A.I. at Tesla, which is also building autonomous driving technology.

In essence, Mr. Musk was poaching his own talent. Since then, he has stepped down from the OpenAI board, with the lab saying this would allow him to “eliminate a potential future conflict.”


Trilobites: How Do You Count Endangered Species? Look to the Stars


But cameras made for daylight can miss animals or poachers moving through vegetation, and the devices don’t work at night. Infrared cameras can help: Dr. Wich had been using them for decades to study orangutans.

These cameras yield large amounts of footage that can’t be analyzed fast enough. So what do animals and stars have in common? They both emit heat. And much like stars, every species has a recognizable thermal footprint.

“They look like really bright, shining objects in the infrared footage,” said Dr. Burke. So the software used to find stars and galaxies in space can be used to seek out thermal footprints and the animals that produce them.
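The core of that software's job can be sketched with generic tools: threshold a thermal frame well above the background level and label the connected warm blobs. The synthetic frame, temperatures and five-sigma threshold below are invented for illustration:

```python
import numpy as np
from scipy import ndimage

# Sketch: pull warm "sources" (animals) out of a cooler background, in the
# spirit of astronomical source extraction. The frame here is synthetic.
rng = np.random.default_rng(1)
frame = rng.normal(20.0, 0.5, size=(200, 200))  # cool background, ~20 C
frame[50:58, 70:80] += 15.0                      # one warm body
frame[140:150, 30:38] += 12.0                    # another warm body

# Keep pixels well above the background level.
background = np.median(frame)
noise = frame.std()
hot = frame > background + 5 * noise

# Group hot pixels into connected blobs and report each blob's centroid.
labels, n_blobs = ndimage.label(hot)
centroids = ndimage.center_of_mass(frame, labels, range(1, n_blobs + 1))
print(n_blobs, centroids)  # expect 2 blobs, near (53.5, 74.5) and (144.5, 33.5)
```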

To build up a reference library of different animals in various environments, the team is working with a safari park and zoo to film and photograph animals. With these thermal images — and they’ll need thousands — they’ll be able to better calibrate algorithms to identify target species in ecosystems around the world.

Photo: Rhinos observed as part of the tests. The researchers found that, like stars, animals have a recognizable thermal footprint. Credit: Endangered Wildlife Trust/LJMU

The experts started with cows and humans in England. On a sunny, summer day in 2015, the team flew their drones over a farm to see if their machine-learning algorithms could locate the animals in infrared footage.

For the most part, they could.

But accuracy was compromised when drones flew too high, cows huddled together, or roads and rocks heated up in the sun. In a later test, the machines occasionally mistook hot rocks for students pretending to be poachers hiding in the bush.

Last September, the scientists honed their tools in the first field test in South Africa. There, they found five Riverine rabbits in a relatively small area. These shy animals are among the world’s most endangered mammals. Only a thousand have ever been spotted by people.

The tests helped the scientists calculate an optimal height to fly the drones. The team also learned that animals change shape in real time (rocks don’t) as drones fly over. And the researchers found that rain, humidity and other environmental, atmospheric and weather conditions can interfere with proper imaging.
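The shape observation suggests a simple filter one could layer on top of the detections: track each blob across consecutive frames and keep only the ones whose outline changes. A sketch of that idea follows, with an invented variability threshold:

```python
import numpy as np

# Sketch: separate animals from sun-heated rocks by how much a detected
# blob's footprint changes from frame to frame. The threshold is illustrative.

def is_animal(masks_over_time, min_variation=0.1):
    """An object looks animal-like if its hot-pixel area varies over time."""
    areas = np.array([float(m.sum()) for m in masks_over_time])
    variation = areas.std() / max(areas.mean(), 1.0)  # coefficient of variation
    return variation > min_variation

# A rock: the same 5x5 hot patch in every frame.
rock = [np.ones((5, 5), dtype=bool) for _ in range(10)]

# An animal: a hot patch whose extent changes as the body and limbs move.
animal = [np.ones((5, 5 + (t % 3)), dtype=bool) for t in range(10)]

print(is_animal(rock))    # False
print(is_animal(animal))  # True
```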

The scientists are refining their system to account for these issues. And in two years, Dr. Burke said, they plan to have a fully automatic prototype ready for testing. Within five years, she hopes to sell systems at cost — today, just around $15,000.

In the meantime, these astro-ecologists are also working with search and rescue groups to help find people lost at sea or in fog. And starting in May, they will collaborate with conservation groups and other universities to look for orangutans and spider monkeys in the dense forests of Malaysia and Mexico, as well as for river dolphins in Brazil’s murky Amazon River.


The Story of a Voice: HAL in ‘2001’ Wasn’t Always So Eerily Calm


Even when Kubrick was making the film, the director sensed HAL’s larger implications. He said in a 1969 interview with the author and critic Joseph Gelmis that one of the things he was trying to convey was “the reality of a world populated — as ours soon will be — by machine entities that have as much, or more, intelligence as human beings. We wanted to stimulate people to think what it would be like to share a planet with such creatures.”

So how was this particular creature created?

The “2001” historian David Larson said that “Kubrick came up with the final HAL voice very late in the process. It was determined during ‘2001’ planning that in the future the large majority of computer command and communication inputs would be via voice, rather than via typewriter.”

But artificial intelligence was decades from a convincing facsimile of a human voice — and who was to say how a computer should sound anyway?

To play HAL, Kubrick settled on Martin Balsam, who had won the best supporting actor Oscar for “A Thousand Clowns.” Perhaps there was a satisfying echo that appealed to Kubrick — both were from the Bronx and sounded like it. In August 1966, Balsam told a journalist: “I’m not actually seen in the picture at any time, but I sure create a lot of excitement projecting my voice through that machine. And I’m getting an Academy Award winner price for doing it, too.”

Adam Balsam, the actor’s son, told me that “Kubrick had him record it very realistically and humanly, complete with crying during the scene when HAL’s memory is being removed.”

Then the director changed his mind. “We had some difficulty deciding exactly what HAL should sound like, and Marty just sounded a little bit too colloquially American,” Kubrick said in the 1969 interview. Mr. Rain recalls Kubrick telling him, “I’m having trouble with what I’ve got in the can. Would you play the computer?”

Kubrick had heard Mr. Rain’s voice in the 1960 documentary “Universe,” a film he watched at least 95 times, according to the actor. “I think he’s perfect,” Kubrick wrote to a colleague in a letter preserved in the director’s archive. “The voice is neither patronizing, nor is it intimidating, nor is it pompous, overly dramatic or actorish. Despite this, it is interesting.”

Photo: Douglas Rain at the Stratford Festival in Canada in 1968. The year before, he recorded HAL’s voice for Stanley Kubrick. Credit: Doug Griffin/Toronto Star, via Getty Images

In December 1967, Kubrick and Mr. Rain met at a recording studio at the MGM lot in Borehamwood, outside London.

The actor hadn’t seen a frame of the film, then still deep in postproduction. He met none of his co-stars, not even Keir Dullea, who played the astronaut David Bowman, HAL’s colleague turned nemesis. The cast members had long since completed their work, getting HAL’s lines fed to them by a range of people, including the actress Stefanie Powers. Mr. Rain hadn’t even been hired to play HAL, but to provide narration. Kubrick finally decided against using narration, opting for the ambiguity that was enraging to some viewers, transcendent to others.

It’s not a session Mr. Rain remembers fondly: “If you could have been a ghost at the recording you would have thought it was a load of rubbish.”

Kubrick was attracted to Mr. Rain for the role partly because the actor “had the kind of bland mid-Atlantic accent we felt was right for the part,” he said in the 1969 interview with Mr. Gelmis. But Mr. Rain’s accent isn’t mid-Atlantic at all; it’s Standard Canadian English.

As the University of Toronto linguistics professor Jack Chambers explained: “You have to have a computer that sounds like he’s from nowhere, or, rather, from no specific place. Standard Canadian English sounds ‘normal’ — that’s why Canadians are well received in the United States as anchormen and reporters, because the vowels don’t give away the region they come from.”

Mr. Rain had played an astonishing range of characters in almost 80 productions at the Stratford Festival in Ontario over 45 years, understudying Alec Guinness in “Richard III” in 1953 and going on to play Macbeth, King Lear and Humpty Dumpty. Sexy, intimidating, folksy, sly or persuasive, he could deliver whatever a role needed.

Mr. Rain had to quickly fathom and flesh out HAL, recording all of his lines in 10 hours over two days. Kubrick sat “three feet away, explaining the scenes to me and reading all the parts.”

Kubrick, according to the transcript of the session in his archive at the University of the Arts London, gave Mr. Rain only a few notes of direction, including:

— “Sound a little more like it’s a peculiar request.”

— “A little more concerned.”

— “Just try it closer and more depressed.”

Though HAL has ice water in his digital veins, he exudes a dry wit and superciliousness that makes me wonder why someone would deliberately program a computer to talk this way. Maybe we should worry about A.I.

When HAL says, “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal,” Mr. Rain somehow manages to sound both sincere and not reassuring. And his delivery of the line “I think you know what the problem is just as well as I do” has the sarcastic drip of a drawing-room melodrama and also carries the disinterested vibe of a polite sociopath.

Kubrick had Mr. Rain sing the 1892 love song “Daisy Bell” (“I’m half crazy, all for the love of you”) almost 50 times, in uneven tempos, in monotone, at different pitches and even just by humming it. In the end, he used the very first take. Sung as HAL’s brain is being disconnected, it’s from his early programming days, his computer childhood. It brings to an end the most affecting scene in the entire film.

Scott Brave said the moment “is so powerful that you feel very uncomfortable; all of sudden HAL feels incredibly close to being alive and being human. You start to empathize with that experience, and you are responding to the death of a machine.”

For a character that’s been endlessly caricatured — in “The Simpsons,” “South Park,” television commercials — HAL has inspired a surprisingly rich range of adjectives over the years. He and his voice have been described as aloof, eerily neutral, silky, wheedling, controlled, baleful, unisex, droll, soft, conversational, dreamy, supremely calm and rational. He’s discursive, suave, inhumanly cool, confident, superior, deadpan, sinister, patronizing and asexual.

Anthony Hopkins has said it influenced his performance as the serial killer Hannibal Lecter in “The Silence of the Lambs.” Douglas Rain himself has never seen “2001: A Space Odyssey.” For the retired actor who spent decades at the Stratford Festival and turns 90 in May, the performance was simply a job.

A.I. voice synthesis can’t yet deliver a performance as compelling as his HAL, but it is becoming more … human. The HAL era is almost over: Soon, an A.I. voice will be able to sound like whoever you want it to. In Canada, even Alexa has a Canadian accent.


Microsoft Reorganizes to Fuel Cloud and A.I. Businesses


But with the revamp, the Windows group will be smaller and its engineering efforts dispersed. Windows technology, analysts said, will increasingly be folded into Microsoft’s cloud software. Other engineers will create the user applications — “Windows experiences,” in Microsoft terms — that ride on top of the underlying software, in smartphones, tablets, personal computers and game consoles.

Photo: Terry Myerson, executive vice president of Microsoft’s Windows and devices group, will be departing as part of the reorganization. Credit: Eric Risberg/Associated Press

Today, cloud services from Amazon, Microsoft and Google have become the internet equivalent of Windows, the dominant operating system of the personal computer era.

Software developers write new applications to run on the cloud services, just as they once did for the Windows operating system. Microsoft has successfully rewritten its popular Office productivity products as web-based applications running on the cloud.

The reorganization is “really doubling down on the cloud as the fundamental platform for Microsoft,” said Ed Anderson, an analyst at Gartner.

Microsoft’s cloud business is powering its growth. In the most recent quarter, its Azure business grew 98 percent and its cloud-based Office 365 offering grew 41 percent. By contrast, the division that includes the Windows PC software grew 2 percent.

The formal relegation of the Windows franchise, said Michael Cusumano, a professor at the Massachusetts Institute of Technology’s Sloan School of Management, “has been a long time coming.” And such a transition, Mr. Cusumano said, “probably had to be done by a second or third generation of leader.” Mr. Nadella succeeded Steven A. Ballmer, the longtime ally and friend of Microsoft’s co-founder, Bill Gates.

Beyond the organizational changes, Mr. Nadella said in his email that Microsoft’s research leader, Harry Shum, and president, Brad Smith, have established a panel, the A.I. and Ethics in Engineering and Research Committee, to increase the odds that A.I. technology “benefits the broader society.”

That move, said Patrick Moorhead, an independent analyst, is Microsoft’s effort to show it is “serious about the broader implications of A.I.” at a time of rising concern about the technology’s influence on people’s behavior and as a threat to jobs.


Tech Tip: Chatting Up the Google Assistant


Q. Can I have a conversation with Google Assistant, or will there be an update in the future that will let me have one?

A. Google Assistant, the company’s voice-based helper that is similar to Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana, can already hold basic two-way conversations based on spoken or typed questions and commands. However, more intricate interaction with all of these virtual assistants is coming as companies expand their research into artificial intelligence, machine learning and natural language processing. (For those with privacy concerns, keep in mind that most virtual assistant software is designed to collect personal data.)

The Google Assistant software is available as an app for Android and iOS devices, built into Google Home and other speakers, Android-based wearables, cars, televisions, smart-home appliances and other gear. If you are not sure how to talk to the program, the Google Assistant site has a lengthy list of the questions, commands and topics that you can use with the software, complete with suggestions on how to phrase your requests.

You can, for example, tell Google Assistant to remember where you parked your car and then ask the software to remind you of the location later. For those with Google Home speakers, the company recently released a series of Routines, which run through a set of regular daily tasks like adjusting the thermostat and lights before reporting the traffic and weather as you wake up.

When you ask, Google Assistant will start a conversation with third-party chatbot personalities like the Hogwarts Sorting Hat or Bobo the Panda. The Cyber Argument bot on the site can even pit a Google Home speaker against a nearby Amazon Alexa-powered device; for the curious, video clips of chatbots arguing are available, as is an online publication called Chatbots Magazine.

Photo: The Google Assistant can have simple conversations with chatbots like the Harry Potter-inspired Sorting Hat app.

But beyond novelty applications, Google (and the other companies) are pushing to develop conversational user interfaces for their products to make them more useful and able to handle complex sets of tasks. An online guide for Google Assistant developers can give you an idea of how apps are designed to work with the software, and you will most likely see software updates to Google Assistant as the software becomes more advanced.
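In general terms, conversational apps of this kind route a parsed intent and its parameters to a fulfillment handler, which returns the text the assistant should speak. The sketch below mimics that request/response loop for the parked-car example mentioned above; the intent names and payload shape are invented for illustration and are not Google's actual schema:

```python
# Illustrative fulfillment handler: an assistant platform sends the parsed
# intent and parameters, and the handler returns the reply to speak.
# The intent names and JSON shape are invented, not Google's real payload.

PARKING_SPOTS = {}  # user_id -> remembered parking location


def handle_intent(request: dict) -> dict:
    intent = request.get("intent")
    user = request.get("user_id", "anonymous")
    params = request.get("parameters", {})

    if intent == "remember_parking":
        PARKING_SPOTS[user] = params.get("location", "an unknown spot")
        reply = f"Okay, I'll remember that you parked at {PARKING_SPOTS[user]}."
    elif intent == "recall_parking":
        spot = PARKING_SPOTS.get(user)
        reply = f"You parked at {spot}." if spot else "I don't have a spot saved yet."
    else:
        reply = "Sorry, I can't help with that yet."

    return {"speech": reply}


if __name__ == "__main__":
    print(handle_intent({"intent": "remember_parking", "user_id": "u1",
                         "parameters": {"location": "level 2, row C"}}))
    print(handle_intent({"intent": "recall_parking", "user_id": "u1"}))
```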

Amazon, Apple and Microsoft have similar developer programs for their own virtual assistants. To help inspire its developers even more, Amazon last year created the Alexa Prize, a contest to advance conversational artificial intelligence, with a financial reward for the research team that creates the best “socialbot” that can “converse coherently and engagingly with humans on a range of current events and popular topics such as entertainment, sports, politics, technology and fashion.”


Lyft to Bring Driverless Car Tech to Broader Auto Industry


Uber, Lyft’s main rival, has been developing self-driving technology mostly on its own. Waymo, a Lyft partner, is slowly introducing its own ride-hailing service using autonomous Chrysler Pacifica minivans equipped with Waymo’s own hardware and software.

The hype over self-driving cars has made development of the technology challenging for some traditional automakers. Car companies face huge salaries for top artificial intelligence engineers and limited access to data and key components. The competitive landscape is so charged that it has already given birth to at least one high-profile lawsuit.

As a result, automakers are confronting a choice: Pay big for a technology start-up or risk falling behind. Last year, Ford announced it would invest $1 billion in Argo AI, an artificial intelligence start-up focused on developing autonomous vehicle technology. GM acquired Cruise for an estimated $1 billion in cash, stock and incentive packages in 2016, the same year it invested $500 million in Lyft.

Photo: A Magna safety testing site in Germany. Magna supplies a range of driver-assist systems to automakers and also builds entire vehicles for customers like Mercedes-Benz, BMW and Jaguar. Credit: Kien Hong Le/Bloomberg

After lagging behind Uber, Lyft has recently made a concerted push into self-driving cars. The company opened a research facility in Palo Alto, Calif., and has aggressively recruited engineers.

It also has a major asset for self-driving technology — a ride-hailing network picking up and dropping off passengers 10 million times a week. This provides Lyft with a customer base to introduce and test the vehicles and a way to collect information that can be used to “train” autonomous cars.

But Raj Kapoor, Lyft’s chief strategy officer, said it would be a few years before truly autonomous vehicles were ready for the road.

“I believe this relationship will get us there faster,” Mr. Kapoor said.

Magna, a Canadian auto parts maker, already supplies a wide range of driver-assist technology to its customers, including a system for staying in lanes, automatic emergency braking and rearview cameras. It also builds entire vehicles for customers like Mercedes-Benz, BMW and Jaguar — a capability that has made the company a potential partner for a new entrant like Apple.

Magna has already been working on hardware for self-driving cars, including radar and lidar — an abbreviation for light detection and ranging — that help the vehicles see the world around them.

But Magna said the partnership with Lyft would be essential to helping it push further into autonomous vehicles, combining its automotive and manufacturing experience with Lyft’s ride network to better understand the many situations that a self-driving car will encounter.

“The question isn’t whether autonomous vehicles are going to happen but how long the transition is,” said Swamy Kotagiri, Magna’s chief technology officer.

Under the partnership, Lyft would take the lead in developing self-driving car technology while Magna would oversee manufacturing of the systems. The two companies would share the development costs and the resulting intellectual property.

Lyft compared its strategy of operating its own self-driving cars while also welcoming driverless cars from other companies to Amazon’s dual role as a retailer that sells products directly to customers and a virtual mall that provides space for other companies to sell goods on its site.


Alphabet Program Beats the European Human Go Champion


Photo: Demis Hassabis, a former child chess prodigy, is vice president of engineering at Alphabet’s DeepMind and leads Alphabet’s general A.I. efforts. Credit: Alphabet

Artificial intelligence researchers are closing in on a new benchmark for comparing the human mind and a machine. On Wednesday, DeepMind, a research organization that operates under the umbrella of Alphabet, reported that a program combining two separate algorithms had soundly defeated a high-ranking professional Go player in a series of five matches.

The result, which appeared in the Jan. 27 edition of the journal Nature, is further evidence of the power created when a class of A.I. machine learning programs known as “deep neural networks” is combined with immense sets of data.

Go is seen as a good test for artificial intelligence researchers because it is more complex than chess, with a far larger range of possible positions. This makes strategy and reasoning in the game more challenging.
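To put "far larger" in rough numbers, a common back-of-the-envelope comparison raises the average number of legal moves to the power of a typical game length: about 35 moves over roughly 80 turns for chess versus about 250 moves over roughly 150 turns for Go. These are ballpark figures often quoted for the two games, not values from the Nature paper:

```python
# Rough game-tree sizes: (average branching factor) ** (typical game length).
chess_tree = 35 ** 80    # ~35 legal moves per position, ~80 plies per game
go_tree = 250 ** 150     # ~250 legal moves per position, ~150 plies per game

# Count digits to get the order of magnitude without floating-point overflow.
print(len(str(chess_tree)) - 1)  # 123 -> roughly 10**123 possible games
print(len(str(go_tree)) - 1)     # 359 -> roughly 10**360 possible games
```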

Go is played with round black and white stones, and two players alternately place pieces on a square grid with the goal of occupying the most territory. Until recently, software programs had not been able to do better than beat amateur Go players. In the Nature paper, engineers at DeepMind described a program, AlphaGo, that had achieved a 99.8 percent winning rate against other Go programs. It also swept five games from the European Go champion, Fan Hui.

The match between the AlphaGo program and Fan Hui was in October, and the DeepMind program has continued to train since then, said Demis Hassabis, a researcher who founded DeepMind Technologies, which was acquired by Google in 2014. Google changed its name to Alphabet last year, though the company’s traditional ad-based businesses still operate under the Google label.

“The machine has continued to get better. We haven’t hit any kind of ceiling yet on performance,” he said.

The Alphabet approach relies on the newest so-called deep learning methods combined with a more traditional algorithm known as Monte Carlo tree search, which samples and evaluates large numbers of possible sequences of moves. The researchers said they had also trained their program using input from expert human Go players.
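A heavily simplified sketch of how those pieces fit together: a policy prior proposes moves, a value estimate scores positions, and Monte Carlo tree search repeatedly selects, expands and backs up simulated lines of play. The toy counting game below, with uniform and random stand-ins for the policy and value networks, is a placeholder for illustration, not DeepMind's engine:

```python
import math
import random

# Toy game: players alternately add 1 or 2 to a running total; whoever
# reaches exactly 10 wins. The "networks" below are crude placeholders.
WIN_TOTAL = 10


def legal_moves(total):
    return [m for m in (1, 2) if total + m <= WIN_TOTAL]


def policy_prior(total):
    """Stand-in for a policy network: uniform probabilities over legal moves."""
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}


def value_estimate(total, to_move):
    """Stand-in for a value network: score the position by one random playout."""
    mover = to_move
    while total < WIN_TOTAL:
        total += random.choice(legal_moves(total))
        if total == WIN_TOTAL:
            return 1.0 if mover == to_move else -1.0
        mover = -mover
    return 0.0


class Node:
    def __init__(self, total, to_move, prior):
        self.total, self.to_move, self.prior = total, to_move, prior
        self.children = {}               # move -> Node
        self.visits, self.value_sum = 0, 0.0

    def value(self):                     # average outcome from this node's mover's view
        return self.value_sum / self.visits if self.visits else 0.0


def select_child(node, c_puct=1.5):
    """Pick the child with the best value-plus-prior-exploration score."""
    def score(item):
        _, child = item
        explore = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return -child.value() + explore  # child's value is from the opponent's view
    return max(node.children.items(), key=score)


def search(root_total, to_move, simulations=400):
    root = Node(root_total, to_move, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:                         # 1. selection
            _, node = select_child(node)
            path.append(node)
        if node.total < WIN_TOTAL:                   # 2. expansion and evaluation
            for move, p in policy_prior(node.total).items():
                node.children[move] = Node(node.total + move, -node.to_move, p)
            leaf_value = value_estimate(node.total, node.to_move)
        else:
            leaf_value = -1.0                        # side to move has already lost
        for n in reversed(path):                     # 3. backup with sign flips
            n.visits += 1
            n.value_sum += leaf_value if n.to_move == node.to_move else -leaf_value
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]


if __name__ == "__main__":
    random.seed(0)
    print(search(root_total=8, to_move=+1))  # adding 2 reaches 10 and wins: expect 2
```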

The research and the game have created a rivalry among the public relations departments of companies like Alphabet, Microsoft and Facebook.

The day before the Alphabet paper was published, Facebook republished an earlier paper the company had posted on the arXiv.org website. At the same time, Facebook issued a blog post from Yann LeCun, one of its artificial intelligence researchers, and another from the company’s chief executive, Mark Zuckerberg.

The statement by Mr. Zuckerberg resulted in a swift response from one Facebook user that may express a deeper human concern than the narrow results of the research: “Why don’t you leave that ancient game alone and let it be without any artificial players? Do we really need an A.I. in everything?” wrote Konstantinos Karakasidis.

Those concerns are not likely to be heeded. In a blog post Wednesday morning, Alphabet said that, in an effort to reprise IBM’s Deep Blue victory over the chess champion Garry Kasparov in 1997, it would match its AlphaGo program against Lee Sedol, one of the world’s top Go players, in a five-game match in March.

There will be a $1 million prize for the winner, and Mr. Hassabis said that Alphabet would donate the prize to charity if AlphaGo won. The match will be streamed live on YouTube.

Mr. Hassabis, who is a skilled chess player and has been a professional gamer as well, said that Go was a beautiful game, but that “building an A.I. is also a human endeavor and a kind of ingenious one, too. The reason games are used as a testing ground is that they’re kind of like a microcosm of the real world.”