Tech We’re Using: Taking Visual Journalism Into the Sky With Drones


Drones have democratized aerial imagery, as it no longer costs thousands of dollars to rent a helicopter or a plane to make images from above. I’m able to take a drone along with me to assignments in remote locations.

The first drone images I made were on a trip to Greenland’s ice sheet, where I captured images of a meltwater river flowing across the top of the ice. In Llapallapani, Bolivia, I used a drone to show that the second-largest lake in Bolivia had dried up, leaving boats stranded in the sand and a fishing community having to reinvent itself. More recently, I was able to get an aerial angle of the giant moai statues on Easter Island showing their proximity to an eroding coastline, which would not have been possible any other way.

What are the main concerns about using drones?

The main concern with drones is safety. It’s imperative that people understand the laws in their country and in any country where they are considering using a drone. This is one of the most important parts of my job — applying for permission from international governments to safely use a drone in their airspace.

Beyond the official legal approval, which varies from country to country, many times I’m bringing a drone into a small community and I want to address potential ethical concerns. For example, in Bolivia, our journalists made multiple trips to a remote village to familiarize the community with the idea of what a drone is, and to get permission from the village leaders before bringing this new technology into their community.

[Photo: Mr. Haner used a drone to show the effect of climate change on the second-largest lake in Bolivia. Credit: Josh Haner/The New York Times]

There will always be people who abuse new technologies, but I believe that, as a community, journalists are doing a good job self-regulating. There are social media groups where we discuss issues and bring attention to good uses of the technology as well as potential concerns. Professional organizations like the National Press Photographers Association and the Poynter Institute have also come up with an excellent Drone Journalism Code of Ethics.

Drones may seem like toys, but if they are used without proper training and safeguards, they can very easily turn into a threat to air safety.

In addition to safety, people’s expectation of privacy is something we must consider. It is best to talk with anyone whose privacy you might violate before you fly, and to think about what you are comfortable doing with your camera on the ground before extending that into the sky. Just because you can peer into someone’s backyard doesn’t mean you should.

Outside work, what tech product are you currently obsessed with?

My wife and I recently purchased the Frame TV from Samsung. When I first saw it, I was fooled into thinking it was a framed photograph surrounded by a mat.

[Photo: A Frame TV made by Samsung blends in on a wall of photographs. Credit: Andrew Burton for The New York Times]

We now have it mounted flush on our wall, part of a gallery wall in our living room alongside art that we’ve collected over the years. When it’s not in use, it blends into the environment. Then there’s the “wow” moment when our friends ask where the TV is and we turn it on, and magically a framed photo turns into a live Warriors basketball game.

Our home and our space are very important to us, and this has allowed us to never feel like the TV is the center of attention.

I also love backpacking, sometimes alone and often to remote areas that in the past would make my family worried. Now, when I get to camp, I press a button on the DeLorme inReach Explorer Satellite Messenger to tell my wife and family that I’m O.K. In addition, I can request the weather forecast for my location. When I’m 20 miles into the backcountry and it begins snowing, it is invaluable to be able to find out if the weather is going to improve or if I need to leave so I can get home safely before they close the roads. It’s an added safety net that lets me be more spontaneous when exploring.

Technologists predict that text-based storytelling will soon be passé, and that videos and photos will be more important in journalism. What do you think?

[Photo: The DeLorme inReach Explorer Satellite Messenger is a safety net that Mr. Haner takes along when he goes backpacking. Credit: Andrew Burton for The New York Times]

As a society, we are more visually literate than ever before with the omnipresence of cellphone cameras and social media. Everyone is a photographer, and everyone is documenting his or her life in a way unlike ever before. In addition, Instagram and Snapchat have made us expect an accelerated publishing speed and a deprioritization of text.

However, just because we are moving toward a more visual lexicon doesn’t mean that all imagery is created equally. The right way to succeed in journalism is not for something to be more visual — it’s the quality of the visuals that matters. Since our audience is bombarded with imagery through advertising, social media, texting, emoji, virtual reality and artificial reality, there will be a lot of bad journalism produced in the name of these technologies as publishers make bets on catering to a more visual audience. The journalist behind the camera is still the most important part of this discussion. His or her eye is what will make or break these visually driven projects.

We still must find smart ways to push visual journalism further and to make sure our readers know the integrity and ethics behind the creation of our photographs. Strong visual content doesn’t have to come at the expense of prose, and I think that some of our best stories at The Times are collaborations between writers and visual journalists. That said, I do think that many photographers and video journalists are now taking on a greater ownership of our storytelling than ever before.


A Beginner’s Guide to Taking Great Video on Your Phone


Composition: To create compelling video, compose the elements in a scene or sequence deliberately. Use your phone’s LCD the way a fine-art painter might arrange forms, colors, lines and textures on canvas. (For more on composition, visit Kyle Cassidy’s article on Videomaker.com, which offers a wonderful introduction to composition and compositional devices, like the rule of thirds, as well as valuable tips, such as focusing on people’s eyes in your video.)

[Photo. Credit: Brittany Greeson for The New York Times]

Lighting: Light not only defines your subjects but also sets the mood or evokes emotion. Experiment with light and be aware of where your main light source is. For instance, noon sunlight on a cloudless day creates unflattering shadows on your subject’s face, while an overcast or cloudy day produces a softer, more pleasant-looking light. And remember what the legendary film director Martin Scorsese once noted: “Light is at the core of who we are and how we understand ourselves.”

Point of view: Ask yourself “Where am I pointing my camera lens and from what angle?” Consider point of view figuratively, as well: “How will the video’s point of view help me tell the story?” Some videos are like selfies and use a very subjective point of view to connect viewers to the story. For other videos you might want a more detached, less personal point of view. And when shooting small children or babies, get right down on the floor to shoot.

A video can resonate for reasons other than exquisite technique. The subject might be funny, or the story simply thrilling, sad, or even chaotic. Sometimes, a powerful video, though technically flawed, still draws us in by other means. Two film sequences come to my mind that illustrate this point.

The first, the apology scene from “The Blair Witch Project,” presents a visually awkward composition, in which the subject’s face is dramatically cropped. Also, the lighting and audio are lousy. Yet, the monologue, a horror-film soliloquy of sorts, conveys intensity, mystery and a baroque quality. You can almost feel the presence of a dark force outside the visual frame.

In the second sequence, the “I just wanna go the distance” sequence from the movie “Rocky” (2:00 in the video clip), the video subtly elevates an ordinary moment of doubt. It’s an exceptionally quiet moment, in which Sylvester Stallone, as Rocky, lies down next to Talia Shire, as Adrian, telling her he can’t beat the champ. For nearly two minutes, the camera slowly pushes in as the fighter utters his thoughts. What transfixes us is primarily the audio, since there’s little action. Yet it’s visual, too. I can’t help thinking of it as an updated version of the intensity and pathos you see in the ancient Greek sculpture, “Dying Gaul.”

So, good video obviously operates on a very visual level, but it can be driven in nonvisual ways, too. Keep your eyes open for such opportunities.

Start with the right settings

Before taking video on your phone, set it up properly. One important setting is video resolution, which determines how many pixels make up each frame. Two common resolutions are 1080 HD (1,920 by 1,080 pixels) and 4K (3,840 by 2,160 pixels), the larger of the two.

Next, check the frame rate, which sets how many individual frames per second (fps) your video records. Common settings are 30 fps and 60 fps and, less commonly, 24 fps. The higher the number, the smoother your video will look. Most video in the United States is shot at 30 fps (or 29.97 fps), although 60 fps produces smoother, less jittery footage when depicting action. Some videographers, like Mr. Nachtrieb, prefer filming at 24 fps, which mimics the frame rate used in cinema.

Both of these settings affect how your video looks, and they also determine the final file size. For instance, a five-second video shot at 4K resolution will be roughly four times the size of the same segment shot at 1080 HD. “When it comes to resolution,” says Mr. Nachtrieb, “it’s always going to be a compromise between your storage capacity on your phone and the quality resolution you want. I try to shoot 4K whenever possible.”
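To see where the rough four-to-one figure comes from, compare raw pixel counts: a 4K frame contains about four times as many pixels as a 1080 HD frame, so at the same frame rate the camera records about four times as much raw image data. The sketch below is only a back-of-the-envelope illustration of that ratio, not a file-size predictor, since actual sizes depend on the codec and bit rate your phone uses.

```python
# Back-of-the-envelope comparison of raw pixel throughput at 1080 HD and 4K.
# Real file sizes depend heavily on the codec and bit rate, so treat the
# output as an illustration of the roughly 4x ratio, not a prediction.

RESOLUTIONS = {
    "1080 HD": (1920, 1080),
    "4K": (3840, 2160),
}

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw number of pixels recorded each second at a given resolution."""
    return width * height * fps

FPS = 30  # a common frame rate
baseline = pixels_per_second(*RESOLUTIONS["1080 HD"], FPS)
for name, (width, height) in RESOLUTIONS.items():
    total = pixels_per_second(width, height, FPS)
    print(f"{name}: {total:,} pixels per second ({total / baseline:.1f}x 1080 HD)")
```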

An Easy, but Important Tip: Clean Your Lens

Mr. Nachtrieb recalls how he and a friend were shooting the same subject one day, but his friend’s lens was dirty, which produced blurry video. “Make sure your lens is clear. If it’s not, carefully clean it with a microfiber cloth.”

[Photo. Credit: Alamy]

How to improve your video quality

Shooting video on a phone isn’t the most intuitive experience. Phones were designed as multipurpose devices, which means they lack some important features, like a handgrip or an optical zoom, which gets you closer to your subject without degrading image quality. (Phones mostly rely on digital zoom, which often degrades image quality, so avoid zooming in digitally; instead, “zoom with your feet” by walking closer to your subject, if you can.) Here are several tips for getting better results.

1. Orientation: Be sure to orient your phone horizontally. “When I’m watching the news and there’s footage from a bystander that’s in portrait mode,” says Mr. Nachtrieb, “that’s an immediate signal that it’s an amateur video.” He says that while Instagram and Snapchat seem to be “aiding and abetting” users in creating more vertical, portrait-oriented footage, it’s best to avoid it.

2. Avoid Back Lighting: “Avoid having a window or light source behind your subject, since he or she will look silhouetted,” says Mr. Nachtrieb. Instead, have the light source more to the side of you or behind you.

3. Use both hands: “Always have two hands on the phone,” says Mr. Nachtrieb. “It may seem rudimentary, but it makes a big difference. Phone lenses generally have optical image stabilization built in, so they’re pretty stable already. But using two hands produces even steadier footage.” It also avoids what he calls the Jell-O effect. “If you’re moving the camera around too quickly, it can have a wavy quality to it.” Using two hands lessens the chance of creating this effect.

4. Lock Focus and Exposure: Mr. Nachtrieb suggests tapping on your phone’s LCD on the point you want to focus on, which locks focus on Google Android devices, or holding your finger in place, which locks focus on the Apple iPhone. “In low light, your phone’s camera will hunt for focus.” That makes footage look less professional. Most phones also let you lock or manually adjust the exposure.

5. Improve Your Audio, Too: Most video pros say good-quality audio is essential for powerful video. The good news is that the microphones on smartphones have improved in recent years. What’s more, audio accessories, such as Bluetooth microphones, can make the audio in your video projects sound outstanding (which we’ll get to in a moment).

Here are two audio tricks: Borrow a second phone, start recording audio, and place the phone in your subject’s pocket. “Then, shoot video on your phone from far away,” says Mr. Nachtrieb. “You can always sync up the audio tracks later in video editing.” And when interviewing subjects, don’t interrupt their replies, says Mr. Nachtrieb.

6. Try Slow Mo and Time Lapse Effects: Many smartphones come with powerful video features, including modes that appear to slow down or speed up time, more commonly known as slow motion and time lapse. The former captures video at an accelerated frame rate; when played back at normal speed, action in the video appears much slower than real time. Time lapse uses a lower frame rate; when played back at normal speed, action moves much faster than in real time. Both can produce compelling video.
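The relationship between capture and playback frame rates is simple arithmetic: the apparent speed of the action is the playback rate divided by the effective capture rate. The sketch below works through two hypothetical examples; the 240 fps and one-frame-every-two-seconds settings are illustrative, not taken from any particular phone.

```python
# Apparent speed of the action = playback fps / effective capture fps.
# Slow motion captures many frames per second; time lapse captures very few.

PLAYBACK_FPS = 30  # assumed normal playback rate

def apparent_speed(capture_fps: float, playback_fps: float = PLAYBACK_FPS) -> float:
    """How fast the action appears relative to real time."""
    return playback_fps / capture_fps

# Hypothetical slow motion: 240 fps capture played back at 30 fps.
print(f"240 fps slow motion: {apparent_speed(240):.3f}x real time")        # 0.125x

# Hypothetical time lapse: one frame every 2 seconds (0.5 fps capture).
print(f"1 frame per 2 s time lapse: {apparent_speed(0.5):.0f}x real time")  # 60x
```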

Add accessories, or step up to a stand-alone camera

If you’re not happy with the hardware features on your phone, there are accessories that can expand its capabilities and, in some cases, improve the quality of your video, particularly if you’re interested in vlogging.

Here are a few products to consider, courtesy of our colleagues at Wirecutter, The New York Times Company’s product review site:

■ Accessory Kit Lenses: Wirecutter testers like the Moment Wide-angle lens, $100. The lens attaches over a phone’s camera lens to give you a wider shot without drastically degrading image quality.

■ USB Mics: The Shure MV5, $99, is a great microphone for use with a smartphone.

■ Tripod: The Joby GorillaPod 1K Kit, $35, keeps your phone steady when shooting in low light or time-lapse.

A stand-alone camera can really improve your video. When you’re ready to shoot with something other than your phone, here are some models that earned top ratings from Wirecutter:

■ Interchangeable-Lens Cameras: The Sony a5100, $500, is the best entry-level mirrorless camera, and the Canon EOS Rebel T5i is the best entry-level DSLR. Mirrorless cameras blend portability with powerful picture-taking sensors, while DSLRs are unmatched when it comes to photo quality — but they are larger, bulkier and more expensive. Both types allow you to switch lenses depending on what — or where — you’re taking photos.

■ Advanced Bridge Point-and-Shoot: For something between a mirrorless and a portable point and shoot, check out the Sony RX100 Mark IV, $900.

■ Waterproof-and-Rugged Camera: Want something that can travel with you and take a few bumps and drops? Try the Olympus TG-5, $420.


It’s the Latest in Conservation Tech. And It Wants to Suck Your Blood.


“That this bloodsucking worm might suddenly advance conservation efforts is something few would have predicted,” he said.

Terrestrial leeches are found in humid regions stretching from Madagascar to southern Asia to a number of Pacific islands. Some 70 species have been described, with many more likely awaiting discovery.

They’re a diverse bunch: some are drab brown, others strikingly patterned in greens, reds and blues. Some crawl across the forest floor in search of a meal, while others occupy leafy perches and leap onto unsuspecting hosts.

[Photo: Haemadipsa zeylanica, a leech, in Malaysia after a meal. Leeches can swell to 10 times their body weight after feeding. Credit: Bernard Dupont]

They all share a taste for blood. The tiny vampires may swell to 10 times their body weight after feeding, transforming from agile, threadlike worms into engorged blood sausages. Remnants of a leech’s last gluttonous meal may remain in its body for months — a boon for researchers curious to see what it previously fed on.

The idea of using leech blood meals as an identification tool may have been inspired by a criminal case in Tasmania in 2009. Investigators recovered DNA from a blood-filled leech to link a suspect to a robbery.

Several years later, researchers published the first field study showing that the method worked to identify wildlife, too. While encouraging, the initial study was based on a sample of just 25 leeches caught in Vietnam.

Eager to see if the method might be applicable on a much broader scale, Dr. Tessler and his colleagues set out to conduct an investigation in Bangladesh, China and Cambodia.

The first step — collection — was simple, said Sarah Weiskopf, a biologist at the United States Geological Survey and co-author of the new papers: “You just get to your spot in the forest and look around for things crawling toward you.”

Of the thousands of leeches Ms. Weiskopf, Dr. Tessler and their colleagues captured, 750 were selected for genetic analysis. The researchers cut out the parasites’ digestive tracts and filtered them to extract DNA.

They used primers — short, known sequences of genetic material — to separate mammal from leech DNA, and then they sequenced the results and compared them to a genetic database of known species.
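In outline, the comparison step is a nearest-match search: each sequence recovered from a blood meal is scored against reference sequences for known species and assigned to the closest one. The toy sketch below illustrates the idea with made-up sequences and a naive identity score; real barcoding pipelines use alignment tools such as BLAST and far larger reference databases.

```python
# Toy illustration of matching a DNA fragment recovered from a leech's blood
# meal against a reference database of known species. The sequences and the
# simple identity score are hypothetical stand-ins for a real pipeline.

REFERENCE_DB = {
    "Tanezumi rat": "ACGTTGCAGTACGATCGTAC",
    "Sambar deer":  "ACGTAGCAGTTCGTTCGAAC",
    "Wild boar":    "TCGTAGCTGTACGATGGTTC",
}

def identity(query: str, reference: str) -> float:
    """Fraction of aligned positions at which the two sequences agree."""
    pairs = list(zip(query, reference))
    return sum(a == b for a, b in pairs) / len(pairs)

def best_match(query: str) -> tuple[str, float]:
    """Return the reference species with the highest identity score."""
    return max(
        ((species, identity(query, seq)) for species, seq in REFERENCE_DB.items()),
        key=lambda pair: pair[1],
    )

query = "ACGTTGCAGTACGATCGTAA"  # hypothetical fragment from a blood meal
species, score = best_match(query)
print(f"Closest match: {species} ({score:.0%} identity)")
```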

Leeches, the researchers reported in the journal Systematics and Biodiversity, are far from picky eaters: the parasites had fed on 26 different mammal species, plus three birds.

Nguyen Quang Hoa Anh, a project manager for the World Wildlife Fund in Vietnam who was not involved in the research, has been using leeches for several years to survey wildlife in remote jungles near the Lao border. He confirmed their utility as a monitoring tool, especially when paired with other methods.

“We need as much information as we can possibly get if we are going to identify endangered species and head off the extinction crisis,” he said.

Researchers have traditionally made such identifications by catching animals, collecting hair or dung samples, or setting up camera traps. Capture stresses out and sometimes injures animal subjects, however, and hair and dung can be difficult to find.

Camera traps are the current gold standard in tropical rain forests, but they tend to require significant time and expense.

In the second study, published in The Journal of Applied Ecology and led by Ms. Weiskopf, then at the University of Delaware, the researchers compared camera traps with leech collection, setting up 30 camera trap sites in four forest reserves in Bangladesh and capturing 200 leeches in the same spots.

While the leeches produced evidence of 12 species of mammals (including a small rodent, the Tanezumi rat, that the camera traps missed), the cameras documented 26 species. But Ms. Weiskopf pointed out that the cameras were rolling for nearly nine months, while the leeches were collected in just four days.

Simply collecting a few more leeches, Ms. Weiskopf said, could potentially put the method on par with camera trapping — especially when time and money are taken into account. The leech work cost just $3,770, while the camera traps came in at $24,800.

Given these advantages, Ms. Weiskopf said, “leeches could really complement some of our already existing biodiversity-monitoring methods, and move forward some existing biodiversity conservation efforts.”

Future studies, Dr. Tessler added, could be made even more efficient by blending hundreds of leeches into a slurry and genetically sequencing all of their blood meals in one go.

“I don’t know how big this will become, but I think leeches have quite a bit of potential,” he said. “This is just a fascinating method.”


A Tiny Elevator, a Haunting Reminder in Brooklyn


And the camera in the 15-square-foot elevator has become a daily reminder for parents to not leave their children unsupervised, even inside the building.

“They’re never by themselves,” Evita Worley, 55, said of her three grandchildren. “I think neighbors are now more secure with their children.”

On a recent visit to the building, a few tenants trickled in and out of the elevator with grocery bags from the mini-market on Stanley Avenue. In the lobby, the frame around the elevator door was plastered with fliers: safety guidelines, city youth employment programs, an outdated winter storm warning.

Opposite the elevator, above the 38 mailboxes, hung a flier with a picture of P.J. wearing a graduation cap and gown. “Coming Soon,” it read. “Prince Joshua Avitto Community Center.” The two-story building, with a gymnasium and a dance studio named after Mikayla, is scheduled to open across the street in May.

[Photo: A memorial for P.J. in the courtyard of the Boulevard Houses. Credit: Demetrius Freeman for The New York Times]

That stretch of Schenk Avenue was renamed Prince Joshua Avitto Way in 2015. Next to the street sign bearing P.J.’s name, a community garden known as the Garden of Peace was built in his honor. And all around the neighborhood, shops and restaurants have pictures of P.J. on their walls.

“This building is his memorial,” said Ms. Worley, who has lived 10 years in the Boulevard Houses. “His spirit will always be here.”

P.J.’s family moved out a month after the attack. But on Palm Sunday, while the trial in her son’s killing was underway, Aricka McClinton paid a visit to the neighborhood for Mass. Outside Building 11, she was joined by P.J.’s godmother, Anabelle Alston, and aunt, Sherri Avitto.

“No matter what, everything comes back to this building,” said Ms. Avitto, 52, who lives down the street.


Trilobites: How Do You Count Endangered Species? Look to the Stars


But cameras made for daylight can miss animals or poachers moving through vegetation, and the devices don’t work at night. Infrared cameras can help: Dr. Wich had been using them for decades to study orangutans.

These cameras yield large amounts of footage that can’t be analyzed fast enough. So what do animals and stars have in common? They both emit heat. And much like stars, every species has a recognizable thermal footprint.

“They look like really bright, shining objects in the infrared footage,” said Dr. Burke. So the software used to find stars and galaxies in space can be used to seek out thermal footprints and the animals that produce them.
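Conceptually, the detection step works the way source-finding does on a telescope image: threshold the thermal frame so that only unusually warm pixels remain, then group connected warm pixels into candidate “sources.” The sketch below is a simplified stand-in using NumPy and SciPy on a synthetic frame, not the astronomy software the team actually uses.

```python
# Minimal stand-in for astronomical source detection applied to a thermal
# image: pixels well above the background are grouped into connected
# "sources," each a candidate warm-bodied animal. The array here is
# synthetic; real footage would come from a drone's infrared camera.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
frame = rng.normal(loc=20.0, scale=0.5, size=(200, 200))  # cool background, deg C
frame[50:60, 80:95] += 15.0    # synthetic warm blob #1
frame[140:150, 30:42] += 12.0  # synthetic warm blob #2

threshold = frame.mean() + 5 * frame.std()          # "bright source" cutoff
labels, n_sources = ndimage.label(frame > threshold)
centroids = ndimage.center_of_mass(frame, labels, range(1, n_sources + 1))

print(f"Detected {n_sources} warm sources")
for i, (y, x) in enumerate(centroids, start=1):
    print(f"  source {i}: centroid at row {y:.0f}, col {x:.0f}")
```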

To build up a reference library of different animals in various environments, the team is working with a safari park and zoo to film and photograph animals. With these thermal images — and they’ll need thousands — they’ll be able to better calibrate algorithms to identify target species in ecosystems around the world.

[Photo: Rhinos observed as part of the tests. The researchers found that, like stars, animals have a recognizable thermal footprint. Credit: Endangered Wildlife Trust/LJMU]

The experts started with cows and humans in England. On a sunny, summer day in 2015, the team flew their drones over a farm to see if their machine-learning algorithms could locate the animals in infrared footage.

For the most part, they could.

But accuracy was compromised when drones flew too high, cows huddled together, or roads and rocks heated up in the sun. In a later test, the machines occasionally mistook hot rocks for students pretending to be poachers hiding in the bush.

Last September, the scientists honed their tools in the first field test in South Africa. There, they found five riverine rabbits in a relatively small area. These shy rabbits are among the world’s most endangered mammals. Only a thousand have ever been spotted by people.

The tests helped the scientists calculate an optimal height to fly the drones. The team also learned that animals change shape in real time (rocks don’t) as drones fly over. And the researchers found that rain, humidity and other environmental, atmospheric and weather conditions can interfere with proper imaging.

The scientists are refining their system to account for these issues. And in two years, Dr. Burke said, they plan to have a fully automatic prototype ready for testing. Within five years, she hopes to sell systems at cost — today, just around $15,000.

In the meantime, these astro-ecologists are also working with search and rescue groups to help find people lost at sea or in fog. And starting in May, they will collaborate with conservation groups and other universities to look for orangutans and spider monkeys in the dense forests of Malaysia and Mexico, as well as for river dolphins in Brazil’s murky Amazon River.


How Driverless Cars See the World Around Them


Even in situations where lidar works well, these companies want backup systems in place. So most driverless cars are also equipped with a variety of other sensors.

Like what?

Cameras, radar and global positioning system antennas, the kind of GPS hardware that tells your smartphone where it is.

With the GPS antennas, companies like Uber and Waymo are providing cars with even more information about where they are in the world. With cameras and radar sensors, they can gather additional information about nearby pedestrians, bicyclists, cars and other objects.

[Photo: A rooftop lidar system, which measures distances using pulses of light, on a Ford test car. Credit: Ford]

Cameras also provide a way to recognize traffic lights, street signs, road markings and other signals that cars need to take into account.

How do the cars use all that information?

That is the hard part. Sifting through all that data and responding to it require a system of immense complexity.

In some cases, engineers will write specific rules that define how a car should respond in a particular situation. If a Waymo car detects a red light, for example, it is programmed to stop.

But a team of engineers could never write rules for every situation a car could encounter. So companies like Waymo and Uber are beginning to rely on “machine learning” systems that can learn behavior by analyzing vast amounts of data describing the country’s roadways.

Waymo now uses a system that learns to identify pedestrians by analyzing thousands of photos that contain people walking or running across or near roads.
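One way to picture that division of labor is a hand-written rule sitting in front of a learned model: the rule covers cases engineers can spell out in advance, like stopping at a red light, while the learned component handles perception tasks like spotting pedestrians. The sketch below is a deliberately simplified, hypothetical illustration of that structure, not the actual software used by Waymo or Uber.

```python
# Hypothetical illustration of combining hand-written driving rules with a
# learned perception component. The PedestrianDetector here is a stub; a
# real system would be a neural network trained on large labeled datasets.
from dataclasses import dataclass

@dataclass
class Scene:
    traffic_light: str    # "red", "yellow", "green", or "none"
    camera_frame: object  # stand-in for an image from the car's cameras

class PedestrianDetector:
    """Stub for a learned model that flags pedestrians in a camera frame."""
    def pedestrian_ahead(self, frame) -> bool:
        # A real detector would run inference on the image here.
        return False

def decide(scene: Scene, detector: PedestrianDetector) -> str:
    # Hand-written rule: a detected red light always means stop.
    if scene.traffic_light == "red":
        return "stop"
    # Learned component: yield if the perception model sees a pedestrian.
    if detector.pedestrian_ahead(scene.camera_frame):
        return "yield"
    return "proceed"

print(decide(Scene("red", camera_frame=None), PedestrianDetector()))    # stop
print(decide(Scene("green", camera_frame=None), PedestrianDetector()))  # proceed
```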

Is that the kind of thing that broke down in Tempe?

It is unclear what happened in Tempe. But these cars are designed so that if one system fails, another will kick in. In all likelihood, the Uber cars used lidar and radar as well as cameras to detect and respond to nearby objects, including pedestrians.

Self-driving cars can have difficulty duplicating the subtle, nonverbal communication that goes on between pedestrians and drivers. An autonomous vehicle, after all, can’t make eye contact with someone at a crosswalk.

“It is still important to realize how hard these problems are,” said Ken Goldberg, a professor at the University of California, Berkeley, who specializes in robotics. “That is the thing that many don’t understand, just because these are things humans do so effortlessly.”

The crash occurred at night. Is that a problem?

These cars are designed to work at night, and some of their sensors operate just as well in the dark as in the daytime. Some companies even argue that it is easier for these cars to operate at night.

But there are conditions that these cars are still struggling to master. They do not work as well in heavy precipitation. They can have trouble in tunnels and on bridges. And they may have difficulty dealing with heavy traffic.
