Tech Tip: Let Gmail Finish Your Sentences


Google’s new machine-learning tools for its mail service can save you time and typos — as long as you are comfortable sharing your thoughts with the software.

Q. The new Gmail feature that lets the software write your mail messages for you sounds intriguing, if not unsettling. How does it work, and has the feature rolled out to regular users so I can see it for myself?

A. The Smart Compose feature of Google’s recent Gmail update does not exactly write your full message for you. The program uses machine learning techniques to evaluate what you are writing — and then suggests what to type next based on that analysis. Gmail’s text suggestions appear in slightly lighter gray type at the end of the sentence you are writing. If you choose to accept the computer-generated words, tap the Tab key to add the material and move on to the next sentence.
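
Google has not published Smart Compose's internals, but the flavor of predictive text is easy to illustrate. Here is a minimal sketch in Python, assuming a toy corpus that stands in for a user's past mail; a real system uses a neural language model rather than this simple bigram counter.

```python
from collections import Counter, defaultdict

# Toy next-phrase suggester in the spirit of predictive text.
# A stand-in corpus of past messages; real models train on far more.
corpus = "thanks for the update . thanks for the invite . see you soon ."

# Count which word tends to follow each word (a bigram model).
following = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def suggest(prefix: str, max_words: int = 3) -> str:
    """Greedily extend what the user typed with the likeliest next words."""
    out = prefix.split()
    for _ in range(max_words):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(suggest("thanks"))  # -> "thanks for the update"
```

A production system must also decide when not to suggest anything; presumably Gmail shows its gray completion only when the model's confidence is high.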

Photo: Once you enable it in the settings, Gmail’s new Smart Compose feature can finish your sentences for you as you type. (Credit: The New York Times)

In theory, the Smart Compose tool can speed up your message composition and cut down on typographical errors. While “machine learning” means the software (and not a human) is scanning your work-in-progress to get information for the predictive text function, you are sharing information with Google when you use its products.

Google Strikes Humble Tone While Promoting A.I. Technology


Mr. Pichai said artificial intelligence had uncovered breakthroughs in health care that humans would not have spotted. An artificial intelligence program running on Google’s machine-learning software, built to help diagnose eye disease from a retina image, found that the same photo could be used to identify cardiovascular risk.

It is the type of meaningful breakthrough that Google executives love to promote, though it has little to do with Google’s core web products or the way it makes money. But even those services are getting an artificially intelligent makeover.

Photo: Developers from Sri Lanka, Bolivia and India were among the visitors at the conference. (Credit: Jim Wilson/The New York Times)

The company demonstrated how its Google Assistant computer software is now capable of calling a person at a hair salon or a restaurant to make a reservation. Google said artificial intelligence had allowed the computer’s voice to sound more human — complete with “uhs” and natural pauses, as well as logical follow-up questions — so the person at the other end does not know that he or she is speaking to a computer.

Improvements in A.I. have allowed Google’s computer assistant to have different voices and accents, including the ability later this year to have the singer John Legend tell you the day’s weather.

The company also demonstrated a new artificially intelligent feature in Gmail, called Smart Compose, that starts to suggest complete sentences in email as you type. Google said this would help users complete emails more quickly with fewer spelling and grammar mistakes. It plans to add this feature over the next few weeks.

But one of its most significant A.I. breakthroughs will never be seen by consumers.

Google said it would roll out a new processing chip to power many of its machine-learning programs. A.I. programs require a great deal of computing power, and custom-made chips housed inside data centers to handle this data crunch have fueled an arms race among the tech industry’s biggest companies. Google said its new chip would be eight times more powerful than the chip it introduced last year.

Mark Hung, a research vice president at the research firm Gartner, said the conference demonstrated how much Google relied on A.I. to make its products stand out.

“Almost everything Google is announcing now is A.I. related,” he said. “Google has a lead on artificial intelligence over many of its competitors, and it’s going to use that as a weapon to advance their products forward.”

In keeping with a theme of a more responsible Google, the company also introduced features aimed at addressing how technology is burrowing deeper into our lives — sometimes in negative ways.

Google unveiled a series of “digital well-being” updates in the next version of its Android smartphone software. They include a timer that allows a person to limit time spent on certain apps each day and a Do Not Disturb feature that silences phone calls and notifications, and that can be turned on by placing the smartphone screen face down on a table.

The company is also trying to encourage good manners with its Google Assistant. The new “pretty please” feature, which encourages children to use “please” when asking for assistance, aims to address the concern that children are learning to speak impolitely because they are talking to more digital assistants.


Why A.I. and Cryptocurrency Are Making One Type of Computer Chip Scarce


When the company recently ordered new hardware from a supplier in China, the shipment was delayed by four weeks. And the price of the chips was about 15 percent higher than it had been six months earlier.

“We need the latest G.P.U.s to stay competitive,” Mr. Scott said. “There is a tangible impact to our research work.”

But he did not blame the shortage on other A.I. specialists. He blamed it on cryptocurrency miners. “We have never had this problem before,” he said. “It was only when crypto got hot that we saw a significant slowdown in our ability to get G.P.U.s.”

G.P.U.s were originally designed to render graphics for computer games and other software. In recent years, they have become an essential tool in the creation of artificial intelligence. Almost every A.I. company relies on the chips.

Like Malong, those companies build what are called neural networks, complex algorithms that learn tasks by analyzing vast amounts of data. Large numbers of G.P.U.s, which consume relatively little electrical power and can be packed into a small space, can process the huge amounts of math required by neural networks more efficiently than standard chips.
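
The “huge amounts of math” are mostly matrix multiplications, which break into thousands of independent multiply-adds that a G.P.U. can run in parallel. A minimal sketch in Python, with NumPy standing in for the G.P.U. kernels that frameworks such as TensorFlow or PyTorch would actually dispatch:

```python
import numpy as np

# One neural-network layer is essentially a single large matrix multiply.
batch = np.random.rand(256, 784)    # 256 input examples, 784 features each
weights = np.random.rand(784, 128)  # the layer's learned parameters

# This one call implies 256 x 784 x 128 (about 25.7 million) multiply-adds.
# They are independent of one another, which is why G.P.U.s, built to run
# thousands of small computations at once, handle them so efficiently.
activations = np.maximum(batch @ weights, 0.0)  # matrix multiply + ReLU

print(activations.shape)  # (256, 128)
```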

Speculators in digital currency are snapping up G.P.U.s for a very different purpose. After setting up machines that help run the large computer networks that manage Ethereum and other Bitcoin alternatives, people and businesses can receive payment in the form of newly created digital coins. G.P.U.s are also efficient for processing the math required for this digital mining.
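
The mining math is a guess-and-check loop: hash the block data with a trial nonce until the result falls below a difficulty target. A simplified sketch in Python; real Ethereum mining used the memory-hard Ethash function, so SHA-256 here is only a stand-in.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash starts with `difficulty` zero hex digits.

    Miners run this loop billions of times per second; the guesses are
    independent, so the work spreads naturally across G.P.U. cores.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print("winning nonce:", mine("example block header"))
```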

Photo: The Volta graphics processing unit, or G.P.U., made by Nvidia. A boom in artificial intelligence and the rise of cryptocurrencies have created a surge in demand for such chips. (Credit: Christie Hemm Klok for The New York Times)

Crypto miners bought 3 million G.P.U. boards — flat panels that can be added to personal and other computers — worth $776 million last year, said Jon Peddie, a researcher who has tracked sales of the chips for decades.

That may not sound like a lot in an overall market worth more than $15 billion, but the combination of A.I. builders and crypto miners — not to mention gamers — has squeezed the G.P.U. supply. Things have gotten so tight that resellers for Nvidia, the Silicon Valley chip maker that produces 70 percent of the G.P.U. boards, often restrict how many a company can buy each day.

“It is a tough moment. We could do more if we had more of these” chips in our data centers, said Kevin Scott, Microsoft’s chief technology officer. “There are real products that could be getting better right now for real users. This is not a theoretical exercise.”

AMD, another G.P.U. supplier, and other companies say that some of the current shortage is a result of a limited worldwide supply of other components on G.P.U. boards, and they note that retail prices have begun to stabilize. But in March, at his company’s annual chip conference in Silicon Valley, Nvidia’s chief executive, Jen-Hsun Huang, indicated that the company still could not produce the chips fast enough.

This has created an opportunity for numerous other chip makers. A company called Bitmain, for instance, has released a new chip specifically for mining Ethereum coins. Google has built its own chip for work on A.I. and is giving other companies access to it through a cloud computing service. Last month, Facebook indicated in a series of online job postings that it, too, was working to build a chip just for A.I.

Dozens of other companies are designing similar chips that take the already specialized G.P.U. into smaller niches, and more companies producing chips means a greater supply and lower prices.

“You want this not just for economic reasons, but for supply chain stability,” said Mr. Scott of Microsoft.

The market will not diversify overnight. Matthew Zeiler, the chief executive and founder of a computer-vision start-up in New York, said the prices of some of the G.P.U. boards that the company uses have risen more than 40 percent since last year.

Mr. Zeiler believes that Nvidia will be very hard to unseat. Many companies will stick with the company’s technology because that is what they are familiar with, and because the G.P.U. boards it provides can do more than one thing.

Kevin Zhang, the founder of ABC Consulting, has bought thousands of G.P.U.s for mining various digital currencies. He said that a chip just for, say, mining Ethereum was not necessarily an attractive option for miners. It cannot be used to mine other currencies, and the groups that run systems like Ethereum often change the underlying technology, which can make dedicated chips useless.

Interest in digital currency mining could cool, of course. But the A.I. and gaming markets will continue to grow.

Mr. Zeiler said that his company had recently bought new G.P.U.s for its data center in New Jersey, but could not install them for more than a month because the computer racks needed to house the chips were in short supply as a result of the same market pressures.

“The demand,” he said, “is definitely crazy.”


After Fatal Uber Crash, a Self-Driving Start-Up Moves Forward


“You don’t succeed by staring in the rearview mirror,” said Andrew Ng, a board member of Drive.ai, who helped found the artificial intelligence labs at Google and the Chinese internet giant Baidu.

Drive.ai said it was moving ahead even as questions about the cause of Uber’s crash remained unanswered. Sarah Abboud, an Uber spokeswoman, declined to comment on specifics, citing a continuing investigation by the National Transportation Safety Board. But she said the company had initiated a “top-to-bottom safety review” and had brought on Christopher A. Hart, a former chairman of the safety board, as an adviser on its “overall safety culture.”

Photo: Inside a Drive.ai autonomous vehicle. (Credit: Cooper Neill for The New York Times)

Tarin Ziyaee, until recently the chief technology officer of the self-driving start-up Voyage, said he hoped the Uber crash would push companies to openly discuss the powerful but still limited technologies inside their test cars.

“We need to talk about the nitty-gritty — what these systems are really doing and where their weaknesses are,” said Mr. Ziyaee, who also worked on autonomous systems at Apple. “These companies are putting secrecy over safety. That has to change. The public deserves to know how things work.”

Mr. Ng said the Uber crash had not affected Drive.ai’s rollout plans. “We’re focused on the path forward,” he said.

Drive.ai was founded in 2015 by Mr. Ng’s wife, Carol Reiley, a roboticist, and several students who worked in a Stanford University A.I. lab overseen by Mr. Ng. The start-up specializes in a rapidly progressing type of artificial intelligence called deep learning, which allows systems to learn tasks by analyzing vast amounts of data.

Venture capital firms including New Enterprise Associates have since invested in the start-up. Based in Mountain View, Calif., Drive.ai has raised $77 million and has more than 100 employees.

Waymo, the autonomous vehicle company that was spun out of Google, is already running a private taxi service outside Phoenix, in a state that is a popular destination for self-driving car experiments. Drive.ai chose to begin its trials in Frisco, where the streets are clean and wide, pedestrian traffic is light and the sun is out for 230 days a year, on average. A Texas law passed in the fall also lets companies operate self-driving services with no restrictions from municipal governments.

When Drive.ai’s free, daytime-only service begins this summer, it will be open to 10,000 people who live or work in the area. The cars will travel along a few miles of road where the speed limit does not exceed 45 miles an hour, with passengers being picked up and dropped off at only a few specific locations.

Backup drivers will be behind the wheel, taking control when needed. But as the program expands, Drive.ai plans on moving drivers into the passenger seat and out of the cars entirely by the end of the year.

Photo: Backup drivers will initially be behind the wheel for Drive.ai, but they will later be moved into the passenger seat and eventually out of the cars entirely. (Credit: Cooper Neill for The New York Times)

Though pedestrians are scarce in the area, the cars will drive through parking lots where they are likely to encounter foot traffic. So Drive.ai equipped its cars with digital displays designed to communicate with pedestrians and other drivers. While an autonomous vehicle cannot make eye contact with a pedestrian or respond to hand signals, it can display a simple message like “Waiting for you to cross” or “Picking up.”

Because the cars are equipped with sensors that gather information about their surroundings by sending out pulses of light — as well as radar and an array of cameras — the cars could potentially operate at night as well. But the start-up decided to keep a tight rein on its service before gradually expanding the route and exposing the cars to new conditions. Drive.ai said it would suspend operations during a downpour and in the rare event of snow.

There will still be situations where the cars are slow to make decisions on their own — in the face of extremely heavy traffic, for instance — but remote technicians employed by Drive.ai will send help to the cars over the internet. The cars will include connections to three separate cellular networks.

Drive.ai said it was working closely with Frisco officials. The city of 175,000 can keep the company abreast of construction zones and other road changes, Mr. Ng said, and signs identifying the area where the cars will drive have been installed.

Thomas Bamonte, a senior program manager for automated vehicles with the North Central Texas Council of Governments, which handles planning for Dallas and surrounding areas, said such work would become increasingly important as the metropolitan area added roughly a million new people every 10 years.

“We want to invest in new technology rather than the physical expansion of roadways,” he said.

Asked if the Uber crash gave him pause, he said state law allowed companies like Drive.ai to operate without interference from local governments. The companies, he said, must be cautious.

Noah Marshall, a financial analyst with Jamba Juice, which is based in Frisco, said the new autonomous taxi service would be a “great thing” for the town. His office is along Drive.ai’s route, and he said he hoped to try the service.

Other Frisco residents were warier.

“This might be a good idea, but there is so much traffic here, and Texans aren’t very patient,” said Mark Mulch, a local real estate agent. Referring to one Arizona city where self-driving cars are being tested, he added: “Scottsdale is laid back. But Dallas is too fast.”


Facebook Adds A.I. Labs in Seattle and Pittsburgh, Pressuring Local Universities


“It is worrisome that they are eating the seed corn,” said Dan Weld, a computer science professor at the University of Washington. “If we lose all our faculty, it will be hard to keep preparing the next generation of researchers.”

With the new labs, Facebook — which already operates A.I. labs in Silicon Valley, New York, Paris and Montreal — is establishing two new fronts in a global competition for talent.

Over the last five years, artificial intelligence has been added to a number of tech products, from digital assistants and online translation services to self-driving vehicles. And the world’s largest internet companies, from Google to Microsoft to Baidu, are jockeying for researchers who specialize in these technologies. Many of them are coming from academia.

“We’re basically going where the talent is,” Mr. Schroepfer said.

But the supply of talent is not keeping up with demand, and salaries have skyrocketed. Well-known researchers are receiving compensation in salary, bonuses and stock worth millions of dollars. Many in the field worry that the talent drain from academia could have a lasting impact in the United States and other countries, simply because schools won’t have the teachers they need to educate the next generation of A.I. experts.

Over the last few months, Facebook approached a number of notable researchers in Seattle. It hired Luke Zettlemoyer, a professor at the University of Washington who specializes in technology that aims to understand and use natural human language, the company confirmed. This is an important area of research for Facebook as it struggles to identify and remove false and malicious content on its networks.

In the fall, Mr. Zettlemoyer told The New York Times that he had turned down an offer from Google that was three times his teaching salary (about $180,000, according to public records) so he could keep his post at the university. Instead, he took a part-time position at the Allen Institute for Artificial Intelligence, a Seattle lab backed by the Microsoft co-founder Paul Allen.

Many researchers retain their professorships when moving to the big companies — that’s Mr. Zettlemoyer’s plan while he works for Facebook — but they usually cut back on their academic work. At Facebook, academics typically spend 80 percent of their time at the company and 20 percent at their university.

Photo: Luke Zettlemoyer had turned down a position at Google that would have more than tripled his salary as a professor at the University of Washington. But he recently accepted a job with Facebook. (Credit: Kyle Johnson for The New York Times)

Like the other internet giants, Facebook acknowledges the importance of the university system. But at the same time, the companies are eager to land top researchers.

In Pittsburgh, Facebook hired two professors from the Carnegie Mellon Robotics Institute, Abhinav Gupta and Jessica Hodgins, who specialized in computer vision technology.

The new Facebook lab will focus on robotics and “reinforcement learning,” a way for robots to learn tasks by trial and error. Siddhartha Srinivasa, a robotics professor at the University of Washington, said he was also approached by Facebook in recent months. It was not clear to him why the internet company was interested in robotics.
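
“Reinforcement learning” here means learning from trial and error rather than from labeled examples. A minimal tabular Q-learning sketch in Python, using a toy five-cell corridor (an illustration only, not anything Facebook’s lab has described):

```python
import random

# Corridor of states 0..4 with a reward at state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(300):
    state = 0
    while state != GOAL:
        # Trial and error: explore sometimes, otherwise use current estimates.
        if random.random() < epsilon or q[state][0] == q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        nxt = max(state - 1, 0) if action == 0 else state + 1
        reward = 1.0 if nxt == GOAL else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print([round(max(row), 2) for row in q])  # values grow toward the goal
```

The agent is never told which moves are correct; repeated attempts and the occasional reward are enough for the value estimates, and the behavior, to converge.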

Andrew Moore, dean of computer science at Carnegie Mellon, did not respond to a request for comment. But over the past several months, he has been vocal about the movement of A.I. researchers toward the big internet companies. Google also operates an engineering office near Carnegie Mellon.

“What we’re seeing is not necessarily good for society, but it is rational behavior by these companies,” he said.

The two new Facebook labs are part of wider expansion for the company’s A.I. operation. In December, Facebook announced that it had hired another computer vision expert, Jitendra Malik, a professor at the University of California, Berkeley. He now oversees the lab at the company’s headquarters in Menlo Park, Calif.

Even with its deep pockets, Facebook faces fierce competition for talent. Mr. Allen recently gave the Allen Institute, which he created in 2013, an additional $125 million in funding. After losing Mr. Zettlemoyer to Facebook, the Allen Institute hired Noah Smith and Yejin Choi, two of his colleagues at the University of Washington.

Like Mr. Zettlemoyer, both specialize in natural language processing, and both say they received offers from multiple internet companies.

The nonprofit is paying Mr. Smith and Ms. Choi a small fraction of what they were offered to join the commercial sector, but the Allen Institute will allow them to spend half their time at the university and collaborate with a wide range of companies, said Oren Etzioni, who oversees the Allen Institute.

“The salary numbers are so large that even Paul Allen can’t match them,” Mr. Etzioni said. “But there are still some people who won’t go corporate.”

Other researchers believe that companies like Facebook still align with their academic goals. Nonetheless, Ed Lazowska, chairman of the computer science and engineering department at the University of Washington, said he was concerned that the large internet companies were luring too many of the university’s professors into the commercial sector.

Carnegie Mellon and the University of Washington, he said, are working on a set of recommendations for commercial companies meant to provide a way for universities and companies to share talent more equally. Mr. Lazowska added that every university should ensure that it did not become too close to one company.

“The university must be a Switzerland,” he said. “We want every company to collaborate with us and to feel like they have an equal opportunity to hire our students and work with our faculty.”


YouTube Says Computers Are Catching Problem Videos


Figuring out how to remove unwanted videos — and balancing that with free speech — is a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University’s Global Digital Policy Incubator.

“It’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side,” Ms. Donahoe said. “It’s a hard problem to solve.”

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented. But the company said the takedowns represented “a fraction of a percent” of YouTube’s total views during the quarter.

Photo: Google said last year it would hire 10,000 people to address policy violations across its platforms. YouTube said on Monday that it had filled a majority of the jobs that had been allotted to it. (Credit: Roger Kisby for The New York Times)

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on A.I. tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks, with embarrassing results. Last year, parents complained that violent or provocative videos were finding their way to YouTube Kids, an app that is supposed to contain only child-friendly content that has automatically been filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is too big a challenge to rely only on human monitors.

Still, in December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms. In a blog post on Monday, YouTube said it had filled the majority of the jobs that had been allotted to it, including specialists with expertise in violent extremism, counterterrorism and human rights, as well as expanding regional teams. It was not clear what YouTube’s final share of the total would be.

YouTube also said that three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company’s machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearing on the site. And in some cases with videos containing nudity or misleading content, YouTube said its computer systems are adept enough to delete the video without requiring a human to review the decision.
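
YouTube has not detailed how re-upload blocking works, but the standard approach is content fingerprinting: store a compact signature of every removed video and reject new uploads that match. A much-simplified sketch in Python; real systems use perceptual fingerprints that survive re-encoding and cropping, which the exact hash used here would not.

```python
import hashlib

# Signatures of videos that have been taken down.
blocked = set()

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in signature. Production systems derive perceptual fingerprints
    # from frames and audio so edited copies still match.
    return hashlib.sha256(video_bytes).hexdigest()

def take_down(video_bytes: bytes) -> None:
    blocked.add(fingerprint(video_bytes))

def upload_allowed(video_bytes: bytes) -> bool:
    return fingerprint(video_bytes) not in blocked

take_down(b"...removed video data...")
print(upload_allowed(b"...removed video data..."))   # False: blocked on re-upload
print(upload_allowed(b"...some new video data..."))  # True
```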

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced so-called machine-learning technology to help computers identify videos associated with violent extremists, 8 percent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of 2018, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users had raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.


A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit


“There is a mountain of demand and a trickle of supply,” said Chris Nicholson, the chief executive and founder of Skymind, a start-up working on A.I.

That raises significant issues for universities and governments. They also need A.I. expertise, both to teach the next generation of researchers and to put these technologies into practice in everything from the military to drug discovery. But they could never match the salaries being paid in the private sector.

In 2015, Elon Musk, the chief executive of the electric-car maker Tesla, and other well-known figures in the tech industry created OpenAI and moved it into offices just north of Silicon Valley in San Francisco. They recruited several researchers with experience at Google and Facebook, two of the companies leading an industrywide push into artificial intelligence.

In addition to salaries and signing bonuses, the internet giants typically compensate employees with sizable stock options — something that OpenAI does not do. But it has a recruiting message that appeals to idealists: It will share much of its work with the outside world, and it will consciously avoid creating technology that could be a danger to people.

“I turned down offers for multiple times the dollar amount I accepted at OpenAI,” Mr. Sutskever said. “Others did the same.” He said he expected salaries at OpenAI to increase as the organization pursued its “mission of ensuring powerful A.I. benefits all of humanity.”

OpenAI spent about $11 million in its first year, with more than $7 million going to salaries and other employee benefits. It employed 52 people in 2016.

Photo: An old video game used for training an autonomous system at OpenAI, a nonprofit lab in San Francisco. (Credit: Christie Hemm Klok for The New York Times)

People who work at major tech companies or have entertained job offers from them have told The New York Times that A.I. specialists with little or no industry experience can make between $300,000 and $500,000 a year in salary and stock. Top names can receive compensation packages that extend into the millions.

“The amount of money was borderline crazy,” Wojciech Zaremba, a researcher who joined OpenAI after internships at Google and Facebook, told Wired. While he would not reveal exact numbers, Mr. Zaremba said big tech companies were offering him two or three times what he believed his real market value was.

At DeepMind, a London A.I. lab now owned by Google, costs for 400 employees totaled $138 million in 2016, according to the company’s annual financial filings in Britain. That translates to $345,000 per employee, including researchers and other staff.

Researchers like Mr. Sutskever specialize in what are called neural networks, complex algorithms that learn tasks by analyzing vast amounts of data. They are used in everything from digital assistants in smartphones to self-driving cars.

Some researchers may command higher pay because their names carry weight across the A.I. community and they can help recruit other researchers.

Mr. Sutskever was part of a three-researcher team at the University of Toronto that created key so-called computer vision technology. Mr. Goodfellow invented a technique that allows machines to create fake digital photos that are nearly indistinguishable from the real thing.
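
The technique Mr. Goodfellow invented is the generative adversarial network, or GAN. Its training objective, as given in his 2014 paper, pits a generator G against a discriminator D in a minimax game, where x is a real photo and z is random noise:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\!\left[\log\!\left(1 - D(G(z))\right)\right]$$

D is trained to tell real photos from generated ones while G is trained to fool D; as the game converges, G’s fakes become nearly indistinguishable from the real thing, which is exactly the effect described above.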

“When you hire a star, you are not just hiring a star,” Mr. Nicholson of the start-up Skymind said. “You are hiring everyone they attract. And you are paying for all the publicity they will attract.”

Other researchers at OpenAI, including Greg Brockman, who leads the lab alongside Mr. Sutskever, did not receive such high salaries during the lab’s first year.

In 2016, according to the tax forms, Mr. Brockman, who had served as chief technology officer at the financial technology start-up Stripe, made $175,000. As one of the founders of the organization, however, he most likely took a salary below market value. Two other researchers with more experience in the field — though still very young — made between $275,000 and $300,000 in salary alone in 2016, according to the forms.

Though the pool of available A.I. researchers is growing, it is not growing fast enough. “If anything, demand for that talent is growing faster than the supply of new researchers, because A.I. is moving from early adopters to wider use,” Mr. Nicholson said.

That means it can be hard for companies to hold on to their talent. Last year, after only 11 months at OpenAI, Mr. Goodfellow returned to Google. Mr. Abbeel and two other researchers left the lab to create a robotics start-up, Embodied Intelligence. (Mr. Abbeel has since signed back on as a part-time adviser to OpenAI.) And another researcher, Andrej Karpathy, left to become the head of A.I. at Tesla, which is also building autonomous driving technology.

In essence, Mr. Musk was poaching his own talent. Since then, he has stepped down from the OpenAI board, with the lab saying this would allow him to “eliminate a potential future conflict.”


Trilobites: How Do You Count Endangered Species? Look to the Stars


But cameras made for daylight can miss animals or poachers moving through vegetation, and the devices don’t work at night. Infrared cameras can help: Dr. Wich had been using them for decades to study orangutans.

These cameras yield large amounts of footage that can’t be analyzed fast enough. So what do animals and stars have in common? They both emit heat. And much like stars, every species has a recognizable thermal footprint.

“They look like really bright, shining objects in the infrared footage,” said Dr. Burke. So the software used to find stars and galaxies in space can be used to seek out thermal footprints and the animals that produce them.
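
Astronomical source-finding boils down to locating compact bright regions against a darker background, which is the same computation as spotting a warm animal in cooler vegetation. A minimal threshold-and-label sketch in Python, with SciPy standing in for the astronomy tools the team actually adapts:

```python
import numpy as np
from scipy import ndimage

# Fake thermal frame: cool background around 20 C with two warm "animals".
frame = np.random.normal(20.0, 0.5, size=(100, 100))
frame[10:14, 30:34] += 15.0  # warm blob 1
frame[60:66, 70:75] += 12.0  # warm blob 2

# Keep pixels well above the background, then label connected regions,
# just as source-extraction software does for stars and galaxies.
hot = frame > frame.mean() + 5 * frame.std()
labels, count = ndimage.label(hot)
centers = ndimage.center_of_mass(hot, labels, range(1, count + 1))

print(f"detected {count} warm objects at {centers}")
```

A full pipeline would then compare each detection’s thermal profile against a reference library to decide which species produced it, which is what the filming project described below is meant to build.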

To build up a reference library of different animals in various environments, the team is working with a safari park and zoo to film and photograph animals. With these thermal images — and they’ll need thousands — they’ll be able to better calibrate algorithms to identify target species in ecosystems around the world.

Photo: Rhinos observed as part of the tests. The researchers found that, like stars, animals have a recognizable thermal footprint. (Credit: Endangered Wildlife Trust/LJMU)

The experts started with cows and humans in England. On a sunny summer day in 2015, the team flew their drones over a farm to see if their machine-learning algorithms could locate the animals in infrared footage.

For the most part, they could.

But accuracy was compromised when drones flew too high, cows huddled together, or roads and rocks heated up in the sun. In a later test, the machines occasionally mistook hot rocks for students pretending to be poachers hiding in the bush.

Last September, the scientists honed their tools in the first field test in South Africa. There, they found five Riverine rabbits in a relatively small area. These shy rabbits are among the world’s most endangered mammals. Only a thousand have ever been spotted by people.

The tests helped the scientists calculate an optimal height to fly the drones. The team also learned that animals change shape in real time (rocks don’t) as drones fly over. And the researchers found that rain, humidity and other environmental, atmospheric and weather conditions can interfere with proper imaging.

The scientists are refining their system to account for these issues. And in two years, Dr. Burke said, they plan to have a fully automatic prototype ready for testing. Within five years, she hopes to sell systems at cost — today, just around $15,000.

In the meantime, these astro-ecologists are also working with search and rescue groups to help find people lost at sea or in fog. And starting in May, they will collaborate with conservation groups and other universities to look for orangutans and spider monkeys in the dense forests of Malaysia and Mexico, as well as for river dolphins in Brazil’s murky Amazon River.


The Story of a Voice: HAL in ‘2001’ Wasn’t Always So Eerily Calm


Even when Kubrick was making the film, the director sensed HAL’s larger implications. He said in a 1969 interview with the author and critic Joseph Gelmis that one of the things he was trying to convey was “the reality of a world populated — as ours soon will be — by machine entities that have as much, or more, intelligence as human beings. We wanted to stimulate people to think what it would be like to share a planet with such creatures.”

So how was this particular creature created?

The “2001” historian David Larson said that “Kubrick came up with the final HAL voice very late in the process. It was determined during ‘2001’ planning that in the future the large majority of computer command and communication inputs would be via voice, rather than via typewriter.”

But artificial intelligence was decades from a convincing facsimile of a human voice — and who was to say how a computer should sound anyway?

To play HAL, Kubrick settled on Martin Balsam, who had won the best supporting actor Oscar for “A Thousand Clowns.” Perhaps there was a satisfying echo that appealed to Kubrick — both were from the Bronx and sounded like it. In August 1966, Balsam told a journalist: “I’m not actually seen in the picture at any time, but I sure create a lot of excitement projecting my voice through that machine. And I’m getting an Academy Award winner price for doing it, too.”

Adam Balsam, the actor’s son, told me that “Kubrick had him record it very realistically and humanly, complete with crying during the scene when HAL’s memory is being removed.”

Then the director changed his mind. “We had some difficulty deciding exactly what HAL should sound like, and Marty just sounded a little bit too colloquially American,” Kubrick said in the 1969 interview. Mr. Rain recalls Kubrick telling him, “I’m having trouble with what I’ve got in the can. Would you play the computer?”

Kubrick had heard Mr. Rain’s voice in the 1960 documentary “Universe,” a film he watched at least 95 times, according to the actor. “I think he’s perfect,” Kubrick wrote to a colleague in a letter preserved in the director’s archive. “The voice is neither patronizing, nor is it intimidating, nor is it pompous, overly dramatic or actorish. Despite this, it is interesting.”

Photo: Douglas Rain at the Stratford Festival in Canada in 1968. The year before, he recorded HAL’s voice for Stanley Kubrick. (Credit: Doug Griffin/Toronto Star, via Getty Images)

In December 1967, Kubrick and Mr. Rain met at a recording studio at the MGM lot in Borehamwood, outside London.

The actor hadn’t seen a frame of the film, then still deep in postproduction. He met none of his co-stars, not even Keir Dullea, who played the astronaut David Bowman, HAL’s colleague turned nemesis. The cast members had long since completed their work, getting HAL’s lines fed to them by a range of people, including the actress Stefanie Powers. Mr. Rain hadn’t even been hired to play HAL, but to provide narration. Kubrick finally decided against using narration, opting for the ambiguity that was enraging to some viewers, transcendent to others.

It’s not a session Mr. Rain remembers fondly: “If you could have been a ghost at the recording you would have thought it was a load of rubbish.”

Kubrick was attracted to Mr. Rain for the role partly because the actor “had the kind of bland mid-Atlantic accent we felt was right for the part,” he said in the 1969 interview with Mr. Gelmis. But Mr. Rain’s accent isn’t mid-Atlantic at all; it’s Standard Canadian English.

As the University of Toronto linguistics professor Jack Chambers explained: “You have to have a computer that sounds like he’s from nowhere, or, rather, from no specific place. Standard Canadian English sounds ‘normal’ — that’s why Canadians are well received in the United States as anchormen and reporters, because the vowels don’t give away the region they come from.”

Mr. Rain had played an astonishing range of characters in almost 80 productions at the Stratford Festival in Ontario over 45 years, understudying Alec Guinness in “Richard III” in 1953 and going on to play Macbeth, King Lear and Humpty Dumpty. Sexy, intimidating, folksy, sly or persuasive, he could deliver whatever a role needed.

Mr. Rain had to quickly fathom and flesh out HAL, recording all of his lines in 10 hours over two days. Kubrick sat “three feet away, explaining the scenes to me and reading all the parts.”

Kubrick, according to the transcript of the session in his archive at the University of the Arts London, gave Mr. Rain only a few notes of direction, including:

— “Sound a little more like it’s a peculiar request.”

— “A little more concerned.”

— “Just try it closer and more depressed.”

Though HAL has ice water in his digital veins, he exudes a dry wit and superciliousness that makes me wonder why someone would deliberately program a computer to talk this way. Maybe we should worry about A.I.

When HAL says, “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal,” Mr. Rain somehow manages to sound both sincere and not reassuring. And his delivery of the line “I think you know what the problem is just as well as I do” has the sarcastic drip of a drawing-room melodrama and also carries the disinterested vibe of a polite sociopath.

Kubrick had Mr. Rain sing the 1892 love song “Daisy Bell” (“I’m half crazy, all for the love of you”) almost 50 times, in uneven tempos, in monotone, at different pitches and even just by humming it. In the end, he used the very first take. Sung as HAL’s brain is being disconnected, it’s from his early programming days, his computer childhood. It brings to an end the most affecting scene in the entire film.

Scott Brave said the moment “is so powerful that you feel very uncomfortable; all of a sudden HAL feels incredibly close to being alive and being human. You start to empathize with that experience, and you are responding to the death of a machine.”

For a character that’s been endlessly caricatured — in “The Simpsons,” “South Park,” television commercials — HAL has inspired a surprisingly rich range of adjectives over the years. He and his voice have been described as aloof, eerily neutral, silky, wheedling, controlled, baleful, unisex, droll, soft, conversational, dreamy, supremely calm and rational. He’s discursive, suave, inhumanly cool, confident, superior, deadpan, sinister, patronizing and asexual.

Anthony Hopkins has said it influenced his performance as the serial killer Hannibal Lecter in “The Silence of the Lambs.” Douglas Rain himself has never seen “2001: A Space Odyssey.” For the retired actor who spent decades at the Stratford Festival and turns 90 in May, the performance was simply a job.

A.I. voice synthesis can’t yet deliver a performance as compelling as his HAL, but it is becoming more … human. The HAL era is almost over: Soon, an A.I. voice will be able to sound like whoever you want it to. In Canada, even Alexa has a Canadian accent.
