20 Questions with Andrew Lazarow—Creativity and Ethics in a Touchless Time

Read Time: 15 minutes

SEGD asks 20 questions of Andrew Lazarow, an award-winning interactive designer at ESI Design, an NBBJ studio (New York), who is passionate about creative problem solving, ethical use of technology and the arts, and who gives us the scoop on new interaction options like touchless haptics.

Andrew Lazarow is an award-winning interactive designer who develops the robust visual and audio systems that support the interactive experiences designed by ESI Design, an NBBJ studio. Lazarow has over 11 years of interaction design experience, focusing on encouraging audiences to be more present and inspiring wonder. His areas of expertise include projection mapping, motion capture, video systems and interactive lighting control.

For the past five years, Lazarow has been a Resident Researcher and Adjunct Professor at NYU’s Interactive Telecommunications Program. As a designer, he focuses on interactive video and projection design for retail and public spaces as well as for theater, dance, and opera.

We got to know more about him and his work, and quizzed him on touch and touchless tech, over the phone at the end of May 2020.

 

+++

 

How did you get into interactive design and creative technology as a career?

I began working in theater thinking I’d be an actor, then worked as a director, and then found multimedia design, which in theater is typically called video or projection design. I did that for a few years, went to graduate school and worked professionally in that world, designing for operas, plays and musicals on Broadway.

 

Well, that’s cool.

It was a lot of fun and a lot of exciting nights, but long hours. And that background instilled a kind of subtlety, patience and storytelling in my design work.

 

Why did you leave the theater?

It was exhausting.

I was actually opening a Broadway musical when I first started talking with ESI [ESI Design, an NBBJ studio], and when I did join the team, the show was in previews. I was running to ESI in the morning, then to the theater in the afternoon to rehearse new scenes going into the show that night, watching the show, giving notes to my team, on repeat until I went full-time.

I love theater, and still continue to do some work in that space. There are a lot of differences and similarities between what I’m doing now and that work, but getting back to a normal schedule after you’ve been working six days and nights a week was lovely.

 

And, why ESI Design, an NBBJ studio?

I felt like ESI was a really good match for me on many levels; a fair amount of my theatrical work had some political undertones to it, so the strong ethical backbone of ESI was something I was drawn to.

This team never starts with the coolest new toy; it always starts by finding out who the client and the visitor or customer are. Who is the person we’re designing for? What is their experience and what does that call for? And then from there, what to put where, what shape it takes and how overt versus subtle it is.

At the core of all of our work is trying to bring people together, rather than deeper into our devices. If we ask you to use your cell phones, the whole point is still to inspire you to connect on a person-to-person level. What we do asks you to be present in a space with other people.

 

What sorts of projects or verticals do you work in and in what roles?

It’s a pretty broad distribution. We do a mix of museums, cultural projects, workplaces, corporate centers and retail. Recently, we’ve completed work for The Statue of Liberty Museum, WarnerMedia’s new offices in Hudson Yards and Beacon Capital Partners.

I am both a designer and a technologist, and on certain projects, the creative director. For example, I’m the creative director for an update to one of the train stations in Boston, in the Back Bay neighborhood, where we’re designing a public installation; there I’m overseeing the physical design, the systems design, the actual media content, motion studies and animation. Other projects have me working more with our physical design team, or purely on the technologist side.

 

What do your colleagues come to you specifically for?

On a technical level, things related to projected images, interfaces and optical illusions are my forte. Actually, I’ve taught at NYU for the last six years, and one of my courses is specifically on optical illusions.

 

Can you tell us more?

It’s at NYU ITP, which is a graduate program in their art school that’s focused on new technology. The class is technically called “Nothing: Creating Illusions,” because for a long time the West ignored the number zero, even after it knew of zero’s existence thanks to other cultures, the ancient Sumerians, Mayans and Chinese, to name a few. It was mainly the Catholic church rejecting the concept: if you can accept that there is zero, then there is the idea of nothing or absence, which could make you question your faith to a degree that they didn’t want to have to deal with.

When zero was finally embraced by the West, it gave rise to linear perspective and the illusion of depth in painting, then to optical illusions, sleight of hand and magic, and effects like Pepper’s Ghost, which is still used for what we call holograms today. The second half of the course looks at the notion of underrepresented stories, reading writers like Bryan Stevenson and Rachel Lloyd and exploring what stories aren’t being told.

 

Fabulous.

Yeah, it is. It’s my favorite course.

 

What would you say are the biggest differences and similarities between designing for theater productions and designing experiences?

I think the biggest similarity is that, at the end of the day, you’re telling a story, and a story has a beginning, a middle and an end. Crafting that arc all the way through, I think, is the same. And your tools are exactly the same; we have the same senses to take in the story.

Not overwhelming the audience or visitors, and pulling focus where you need it when you need it: those are all the same toolkits. The difference, generally speaking, is that in traditional theater the audience stays put; they’re not moving.

So, even though there’s a wide array of viewing angles, they’re looking at it from one perspective. Whereas when you’re doing experience design, you’re creating things meant to react to people. You can predict what they’ll do to a degree, but people will do what people are going to do.

 

That makes sense.

There are some caveats to that, like one of the shows I did at The Public Theater with Daniel Radcliffe called “Privacy.” It was about getting a federal privacy law on the books in Congress, but the short-term goal was to educate people about the data they are unknowingly sending out into the world and how it’s then used. I was brought in as the tech consultant and ended up designing and programming code for a lot of the interactive elements. We went as far as we could legally go hacking the audience and their phones that night. And we had clear terms and conditions, so we weren’t fooling anybody.

But there were moments, like when you had your ticket scanned: we were doing facial tracking alongside emotion analysis, recording what your emotional state was at that moment and then playing it back later in the show. Or using Wireshark and WiFi sniffing to see what networks different phones in the audience had connected to in the past, and where those overlapped. Or having you send us a photo from your phone, so we could read what’s called the EXIF data and know not just the GPS location, but what floor of the building you were on and which cardinal direction you were facing. I think people don’t realize how much data they’re giving away out in the world.
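
As a rough illustration of how much a single photo can reveal, here is a minimal Python sketch that pulls location-related EXIF tags from an image using the Pillow library; the file name is hypothetical, and this is not the show’s actual code.

```python
# Minimal sketch: reading location-related EXIF metadata from a photo.
# Requires a recent Pillow (pip install Pillow); the file name is hypothetical.
from PIL import Image
from PIL.ExifTags import GPSTAGS

image = Image.open("audience_photo.jpg")
exif = image.getexif()

# Tag 34853 (0x8825) is the standard pointer to the GPS sub-directory.
gps_ifd = exif.get_ifd(34853)
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# GPSAltitude hints at elevation (roughly which floor you were on) and
# GPSImgDirection at the cardinal direction the camera was facing.
for key in ("GPSLatitude", "GPSLongitude", "GPSAltitude", "GPSImgDirection"):
    print(key, gps.get(key))
```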

All that to say, there are some shows that are experience design at the same time. “Privacy” was great prep for working at ESI, because it had the live experience, reactive and storytelling components, and an ethical backbone to it.

 

Has the merger with NBBJ impacted your work?

We operate as an independent studio within NBBJ’s universe, if you will, and have access to their amazing resources, like a cognitive scientist doing research, anthropologists on staff and archivists. Once COVID hit, it wasn’t just our team doing research; several of us, including myself, were co-authoring blog posts with people all across NBBJ’s studios about what is shifting, what information can be garnered from past pandemics and epidemics across the world and specifically in the United States, and how people react culturally to these things. So the research resources have been beyond beneficial of late.

 

How else has your professional life changed since the pandemic began?

Working remotely has been effective in a lot of different ways. But that said, there’s something you miss when you can’t be in the space you’re designing for with your team. As good as 3-D models are, it’s important to actually experience things in real 3-D.

In terms of the projects, the speed of them has obviously changed: construction has halted. But other than those in construction, thankfully, our projects that were active before the novel coronavirus reached pandemic level are still going at essentially the same speed and haven’t discernibly had to change because of the technology; we don’t use touchscreens that often, honestly.

What is exciting is that, because people are now thinking about touchless, they’re more open to new ideas than they were before.

 

How have client technology asks changed from mid-2019 to mid-2020?

The Statue of Liberty Museum opened in 2019 and was designed over the years prior; it has common-touch surfaces. Whereas at WarnerMedia’s headquarters, which opened six months ago, we built very large LED walls that we made into touch surfaces by embedding LIDAR sensors. Although those were designed to feel like a normal multitouch surface, they work just as well if your finger is about an inch away from the screen. It still works if you touch it, but you don’t have to.
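
To make the near-touch idea concrete, here is a hedged sketch of how a 2-D LIDAR scanning a plane just in front of a wall can be mapped to touch points; the geometry and function name are hypothetical, not ESI’s production implementation.

```python
# Minimal sketch (hypothetical, not production code): a 2-D LIDAR scans a
# plane about an inch in front of the LED wall, so anything breaking that
# plane registers as a "touch" before the finger ever lands on the glass.
import math

def scan_to_touch_points(scan, wall_width_m, wall_height_m):
    """Map polar LIDAR returns (angle_rad, distance_m) to wall (x, y)."""
    points = []
    for angle_rad, distance_m in scan:
        # Sensor sits at the wall's top-left corner; project each return
        # into the wall's own coordinate frame.
        x = distance_m * math.cos(angle_rad)
        y = distance_m * math.sin(angle_rad)
        # Ignore returns outside the display bounds (mounts, passers-by).
        if 0.0 <= x <= wall_width_m and 0.0 <= y <= wall_height_m:
            points.append((x, y))
    return points

# Example: one return 1 m out at 30 degrees lands inside a 5 m x 3 m wall.
print(scan_to_touch_points([(math.radians(30), 1.0)], 5.0, 3.0))
```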

We have one project in Boston that used thermal cameras to recognize people as they walked in, without them having to touch anything. We had a series of other projects where we used LIDAR sensors. We’ve looked at using Intel RealSense cameras in that capacity; they’re more geared toward gesture recognition. So, we’ve been living in that world for a while.

 

Are touchscreens officially a thing of the past?

I don’t think so. One of the psychological terms that studies talk about is self-protection fatigue. Looking at epidemics and pandemics in the past, we know that people become less willing to engage in safe behaviors over time, and that people will only take more costly steps if they feel there’s a high chance they’re at risk.

 

Why do you think touchscreens became the standard?

It’s the Steve Jobs effect. The fact that most of us have phones in our pockets that are touch or multitouch means that we’re familiar with that interaction model and it was easy to scale up.

Touchscreens are prevalent because they felt risk-free; it’s an interaction we already know and feel comfortable with. Whereas with gesture technology, people might wonder: What is it? What is it recording? Voice recognition in a public setting would raise questions about what’s being recorded, and how would that change the way we interact with each other in public?

 

Can you describe some alternatives to touchscreens?

LIDAR is a great lower-cost alternative that I expect will be a general go-to for gesture interactivity. Time-of-flight or stereo-view cameras, like the Azure Kinect or Intel RealSense, are both really strong, interesting ways to work in that space. Something called a single-value depth sensor, which is simpler and lower cost than LIDAR, is an efficient way to get into that space.
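
As a sketch of how simple that last option can be, here is a minimal presence check built around a single-value depth sensor; the driver call and calibration numbers are hypothetical.

```python
# Sketch of presence detection with a single-value depth sensor (a simple
# time-of-flight module). read_distance_mm is a hypothetical stand-in for
# a real sensor driver; the baseline numbers are made-up calibration.
BASELINE_MM = 2000        # reading when nothing is in front of the sensor
TRIGGER_DELTA_MM = 300    # how much closer a reading must be to count

def visitor_present(read_distance_mm) -> bool:
    """True when something breaks the beam well inside the baseline."""
    return read_distance_mm() < BASELINE_MM - TRIGGER_DELTA_MM

# Example with a fake reading of 1.2 m: someone is standing in range.
print(visitor_present(lambda: 1200))  # True
```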

I know people have been talking a lot about thermal cameras, in terms of elevated skin temperature sensing and automated temperature taking, which they can do. But they can only do that if you’re not wearing a mask and you’re not wearing glasses. A thermal camera is, however, a great way to tell when someone has entered and to tell people apart from one another, in software terms, in a general “blob tracking” way, ideally with one person in the field of view at a time.
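
Here is a minimal sketch of the “blob tracking” use of a thermal camera described above, using OpenCV to threshold warm pixels and locate person-sized regions; the threshold and area values are hypothetical calibration.

```python
# Sketch of thermal "blob tracking": threshold warm pixels in an 8-bit
# grayscale thermal frame, then locate blobs big enough to be a person.
# The 180 threshold and 500 px minimum area are hypothetical calibration.
import cv2
import numpy as np

def find_warm_blobs(frame_gray: np.ndarray, min_area: int = 500):
    """Return (x, y) centroids of warm regions large enough to be a person."""
    _, mask = cv2.threshold(frame_gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            m = cv2.moments(contour)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# Example: a synthetic 100x100 frame with one warm 30x30 "person".
frame = np.zeros((100, 100), dtype=np.uint8)
frame[20:50, 40:70] = 220
print(find_warm_blobs(frame))  # one centroid near (54.5, 34.5)
```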

Eye tracking is the other one I’ve seen popping up a lot, which is amazing for things like diagnosing concussions, for example. It is fast and accurate, but again, that’s in a one-person-at-a-time, controlled-settings scenario. If you needed wayfinding directions based on eye tracking, I would be skeptical.

 

Are AR and apps a viable alternative?

I think that retailers know that our habits are ingrained, and a large part of brand loyalty comes from that. So, in these moments when our routines fall apart and our habits are completely in flux, shopping patterns are open to change. This is a moment where they’re ready for those interactions to change.

The data analytics that can be collected by having you interact with things through your phone, and retailers’ ability to distinctly identify you, would be an incentive for a lot of institutions to have you work with your phone. And from a safety perspective, your phone is something most people besides you don’t touch; if you clean it, you know that it is clean. I do think that it’s both a very interesting and important interface that people can play with and can tap into. Say there’s a large touchscreen: if you’re on the same WiFi network, there are ways you can now control it through your phone.
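
For a sense of how a phone on the same WiFi network can drive a shared screen, here is a generic relay sketch using a recent version of the third-party “websockets” package; it illustrates the pattern, not any specific ESI project, and the message format is hypothetical.

```python
# Generic sketch of a phone-as-controller relay: phones on the venue's
# WiFi connect over WebSocket, and each touch event is forwarded to the
# other clients, including the machine rendering the big screen.
# Requires the third-party "websockets" package (pip install websockets).
import asyncio
import websockets

connected = set()

async def relay(websocket):
    connected.add(websocket)
    try:
        async for message in websocket:   # e.g. '{"x": 0.41, "y": 0.73}'
            # Broadcast each phone's touch event to every other client.
            websockets.broadcast(connected - {websocket}, message)
    finally:
        connected.discard(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()            # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```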

I do think everyone from cultural institutions to retailers has an incentive to do that. I think it’s likely to happen, and if it does, we all just need to be very careful about what terms and conditions we’re agreeing to when we do that.

I also expect RFID to play a more active role. Burberry has been putting it in their higher-end clothes for a few years, in such a way that it is still active when you leave the store. When you come back, it can trigger media tailored specifically to you, and sales reps with iPads can get updates about your purchase history and what you’ve tried on as well. I wouldn’t be surprised if, in this moment of looking for touchless ways to track habits or decisions or make media respond in real time, simple things like RFID become more common.
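
A hedged sketch of that RFID-triggered personalization pattern follows; the tag ID, lookup table and callback are all hypothetical, and this is not a description of Burberry’s actual system.

```python
# Sketch of RFID-triggered personalization: when a known garment tag is
# read at the doorway, fire a media cue tailored to it. The tag ID, the
# lookup table and the play_cue callback are all hypothetical.
PROFILE_BY_TAG = {
    "E200341201B5": {"garment": "trench-coat-classic", "cue": "runway_loop"},
}

def on_tag_read(tag_id: str, play_cue) -> None:
    """Fire a tailored media cue when a known tag enters the store."""
    profile = PROFILE_BY_TAG.get(tag_id)
    if profile is not None:
        play_cue(profile["cue"])

# Example: a known tag walks in, and the cue name is "played" via print.
on_tag_read("E200341201B5", play_cue=print)  # prints "runway_loop"
```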

 

What are your predictions for existing and future installations in the space between now and 2021?

My hopeful prediction is that the shift in habits will cause everybody across the spectrum to be more open to different modes of interaction, and that will give a new wave of attention to things that otherwise might not get the funding or support to be fully developed for a public deployment. Touchless haptics is the new technology that I hope will get more attention as people look at touchless methods of interaction; that attention could push it over the finish line in a way it might not have been before.

In the next year or so, a major focus will be on designing the beginning of every experience: whatever public space you’re walking into, something will need to be front and center telling you what steps are being taken to keep you and the employees safe.

I think touchscreen kiosks will urge you to interact through a phone app as soon as possible, and we’ll be pushed in that direction until we feel comfortable touching common surfaces again. The next generation, I believe, will be near-touch devices.

Also, with some LED products, if you run them at full white, they will create enough heat to kill any bacteria or viruses on that surface, though that may not result in a pleasant electricity bill. I also recently read about sprays being tested for buttons and door handles that prevent viruses or microbes from living on those surfaces.

 

Help me: what is touchless haptics?

Haptics are simulated physical feedback; touchless haptics is that same idea, just sent through ultrasound waves. There’s a pad with what almost function like mini speakers, sending waves that are inaudible to us but that we can feel on our fingertips. It’s similar to the feedback you get with “force touch” on the iPhone, where it feels like you’re pressing a button and you get that physical sensation back, except it’s in mid-air.
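
The core trick in ultrasonic mid-air haptics is focusing many small emitters on one point by delaying each one so all the wavefronts arrive in phase at the focal point. Here is a back-of-the-envelope Python sketch of those per-emitter delays; the array geometry is hypothetical and the real devices do far more.

```python
# Back-of-the-envelope sketch of how an ultrasonic haptics pad focuses on
# a point in mid-air: delay each emitter so every wavefront arrives at
# the focal point at the same time. The array layout here is hypothetical.
import math

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def focus_delays(emitters, focal_point):
    """Per-emitter delays (seconds) so all waves arrive in phase."""
    distances = [math.dist(e, focal_point) for e in emitters]
    farthest = max(distances)
    # The farthest emitter fires first (zero delay); nearer ones wait.
    return [(farthest - d) / SPEED_OF_SOUND_M_S for d in distances]

# A hypothetical 4-emitter strip focusing 20 cm above its center:
emitters = [(x * 0.01, 0.0, 0.0) for x in range(4)]
print(focus_delays(emitters, (0.015, 0.0, 0.20)))
```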

That feedback is what’s missing from gesture recognition; as an experience designer, I think appealing to more senses than just sight and hearing is an exciting opportunity. I don’t know that the technology is at what I would call “primetime ready” yet.

 

Where do visuals come into play?

The physical object creating the waves would likely be separate from the display. Even though you can order one of these devices now, I don’t think it’s engineered to a place where it can really be installed in public, because it’s like a trackpad and sits perpendicular to the display surface. There’s some R&D that needs to happen to create a better experience, but I think the door it opens is really exciting.

 

Can you rank the “touchless” technologies we’ve talked about from least to most expensive to implement?

  1. Single-value depth sensor
  2. AR (development could drive costs)
  3. RFID
  4. Voice recognition (good speech analysis can drive costs)
  5. LIDAR
  6. Gesture like Intel RealSense or Azure Kinect (development could drive costs)
  7. Gaze-tracking (one-person setting)
  8. Thermal cameras (one-person setting)
  9. Touchless haptics (too soon to tell, totally custom)

 

 

Seems like you’re pretty busy. Would you say more than usual?

I feel like everyone right now is at one polar extreme or the other. Both my wife and I are busier than normal and, in keeping with everyone I’ve talked to who has a lot on their plate, we are so grateful for it.

 

In what spare time you have, what have you been reading, watching or doing lately?

Well, I just started reading “Dance Dance Dance” by Murakami, and we just watched “The Half of It” on Netflix. I used to play guitar, violin and piano a bit, and my wife sings, so we have been learning songs together. Our cooking game has certainly gotten a lot stronger.

 

+++

This interview has been edited for length and clarity.