‘Turing’ the Landscape for NextGen Digital Experiences

Jeff Yurek
Published 11/15/2022

Ever since English scientist and inventor Charles Wheatstone invented the stereoscope back in 1838, we’ve seen huge leaps in the augmented and virtual reality world as innovators have tried to deliver ever more realistic, immersive display experiences. However, we still have a long way to go to blur the lines between the real world and the digital world.

That’s why I was so intrigued and excited when, in 2022, Meta CEO Mark Zuckerberg announced that the company aims to pass a “Visual Turing Test” with a VR headset that provides visuals indistinguishable from the real world. Since then, Meta and its Reality Labs have been working to figure out what it takes to build next-generation displays for its virtual/augmented/mixed reality headsets.

“Displays that match the full capacity of human vision are going to unlock some really important things,” Zuckerberg said in a Meta video. “The first is a realistic sense of presence. That’s the feeling of being with someone or in a place as if you’re physically there…The other reason why these realistic displays are important is they are going to unlock a whole new generation of visual experiences.”

To any display technologist listening, that’s kind of an outrageous and borderline impossible goal. No current display technology has come even remotely close to passing a Visual Turing Test, something I’m sure Alan Turing himself would challenge the industry to do.

This has inspired me to take a deeper dive into human-centric design and to think more about how we as an industry can push the envelope on creating better human experiences in digital environments, not just on the display side, but also through more innovation in the software, sensors, and chips that drive these experiences.

What would it take to create a display system capable of passing this test and fooling people into thinking they were seeing reality? From a display technology perspective, it’s pretty straightforward: you just need a lot more pixels, and you need them to be a lot better and a lot faster.

Sounds simple, but some major technological leaps are required before we can imagine a display that physically produces light that fools our eyes into perceiving reality. Just think about the display in a high-end smartphone that most of us are familiar with today; how much more would it need to improve? The human visual system turns out to be pretty impressive. Resolution and brightness would each need to increase by around an order of magnitude. Color is also woefully inadequate compared to what our eyes can perceive: today’s displays reproduce only around 45% of the range of colors our eyes can detect, so color coverage would need to more than double. Today’s pixels are also a little slow compared to our speedy visual system, with around a 50% increase in frame rate needed over a common smartphone.
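To put those multipliers in one place, here is a rough back-of-envelope sketch in Python. The baseline figures are illustrative assumptions for a typical high-end smartphone, not measurements; only the scaling factors come from the paragraph above.

```python
# Back-of-envelope sketch of the multipliers above. The baseline values
# are illustrative assumptions for a high-end smartphone panel, not
# measurements; only the scaling factors come from the text.

baseline = {
    "resolution_mpx": 3.7,    # ~1440x2560-class panel, in megapixels
    "brightness_nits": 1000,  # peak brightness
    "color_coverage": 0.45,   # fraction of perceivable colors reproduced
    "frame_rate_hz": 120,     # refresh rate
}

target = {
    "resolution_mpx": baseline["resolution_mpx"] * 10,             # ~10x
    "brightness_nits": baseline["brightness_nits"] * 10,           # ~10x
    "color_coverage": min(1.0, baseline["color_coverage"] * 2.2),  # >2x
    "frame_rate_hz": baseline["frame_rate_hz"] * 1.5,              # ~1.5x
}

for key in baseline:
    print(f"{key}: {baseline[key]} -> {target[key]:g}")
```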

Display technologists of all stripes are already hard at work on these big challenges: LCDs, MiniLEDs, OLEDs, Quantum Dots, and microLEDs all offer different roadmaps to deliver these amazing experiences. It will be fun to watch over the next few years, but it won’t be enough.

So, let’s say we succeed in building such a panel. Is that all we need to pass the test? Turns out that better display panels and technologies are not enough. I believe software, sensors, and chips will play an increasingly critical role in driving the future of more realistic display technologies and immersive experiences. Let’s take a closer look at each.

Swift Sensors


Sensors are a key component of any human-centric display system; they enable displays to adapt to the environment to deliver better, more immersive experiences, and can even improve battery life.

We are probably all familiar with the ambient light sensor in most modern smartphones that constantly adjusts the display brightness to match the ambient light. This saves power by keeping the display brightness at the appropriate level at all times, and it makes the display more readable for users.
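For the curious, here is a minimal sketch of the idea in Python. The curve shape and the constants are illustrative assumptions, not any phone maker’s actual algorithm.

```python
import math

def target_brightness_nits(ambient_lux: float,
                           min_nits: float = 2.0,
                           max_nits: float = 1000.0) -> float:
    """Map an ambient-light reading to a display brightness target.

    A hypothetical curve: brightness perception is roughly logarithmic,
    and squaring the log fraction keeps the panel dim in dark rooms
    while still ramping up quickly in sunlight.
    """
    lux = max(1.0, min(ambient_lux, 100_000.0))  # clamp to a sane range
    fraction = (math.log10(lux) / 5.0) ** 2      # log10(100_000) == 5
    return min_nits + fraction * (max_nits - min_nits)

for lux in (5, 300, 10_000, 100_000):  # night, office, overcast, sunlight
    print(f"{lux:>7} lux -> {target_brightness_nits(lux):7.1f} nits")
```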

This can be taken much further with deeper integration between sensors, chips, and displays. One example is Intel, which is doing some really interesting work with its Visual Sensing Technology. In this case, the sensors make the device aware not just of the ambient light but of the user’s attention. Users can unlock and wake the device just by looking at it, and when they look away, the screen dims to save power. It’s a bit like the engine in your car shutting off at every stoplight: a bunch of small power savings can really add up to longer battery life.
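As a sketch of the general pattern (not Intel’s actual implementation; the states and timeouts below are assumptions for illustration), an attention-aware display policy might look like this:

```python
from enum import Enum, auto

class ScreenState(Enum):
    ACTIVE = auto()
    DIMMED = auto()
    OFF = auto()

# Hypothetical policy: dim shortly after the user looks away, power the
# panel off after a longer absence, and wake instantly on attention.
DIM_AFTER_S = 5.0
OFF_AFTER_S = 30.0

def next_state(user_present: bool, seconds_since_attention: float) -> ScreenState:
    if user_present:
        return ScreenState.ACTIVE          # wake on attention
    if seconds_since_attention >= OFF_AFTER_S:
        return ScreenState.OFF             # long absence: power down
    if seconds_since_attention >= DIM_AFTER_S:
        return ScreenState.DIMMED          # brief look-away: save power
    return ScreenState.ACTIVE

print(next_state(user_present=False, seconds_since_attention=10.0))  # DIMMED
```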

In XR applications, sensors can have an even bigger impact on the experience. Here, sensors can track where we are looking to deliver sharper images with less GPU power via foveated rendering. They can read our facial expressions so we can communicate more effectively, and they can map the physical environment around us so we don’t bump into walls and can interact with the physical world in AR.
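To make foveated rendering concrete, here is a minimal sketch: shade at full resolution only near the gaze point and progressively coarser in the periphery, where our acuity falls off sharply. The tier boundaries and pixels-per-degree figure are illustrative assumptions, not values from any shipping headset.

```python
import math

# Hypothetical tiers of shading resolution by angular distance from gaze.
FOVEA_DEG = 5.0   # full resolution inside this radius
MID_DEG = 20.0    # half resolution out to here, quarter beyond

def shading_rate(tile_center, gaze, pixels_per_degree=30.0):
    """Return the fraction of full resolution at which to shade a tile.

    tile_center and gaze are (x, y) positions in pixels; angular distance
    is approximated by dividing pixel distance by an assumed
    pixels-per-degree figure.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    degrees = math.hypot(dx, dy) / pixels_per_degree
    if degrees <= FOVEA_DEG:
        return 1.0    # foveal region: shade every pixel
    if degrees <= MID_DEG:
        return 0.5    # near periphery: half resolution
    return 0.25       # far periphery: quarter resolution

print(shading_rate((960, 540), gaze=(960, 540)))  # 1.0 at the gaze point
print(shading_rate((0, 0), gaze=(960, 540)))      # coarser in the corner
```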

One downside to sensors, however, is that they can get in the way. One company doing some interesting work on under-panel sensors, which let displays go edge-to-edge for a more immersive experience, is OTI; with innovations like these, display notches and cut-outs for sensors may soon be a thing of the past. I hope to see more of this in the near future.

‘Chipping’ Away


Today’s VR experience is still relatively primitive in terms of realism, yet it already pushes processing power to the bleeding edge. Immersive digital experiences are going to require a tremendous increase in computing horsepower and efficiency. Perhaps the most obvious example is the sheer GPU power needed to render a realistic environment or human avatar. Reconstruction and rendering of life-like avatars alone will demand computational power that is not available today, and that is only one piece of the processing puzzle. We will need the industry to keep pushing the envelope on more powerful, highly energy-efficient processors that can, for example, enable an average VR headset to handle extended reality experiences such as a spherical view that captures the scene in vivid detail, sensing the surroundings via LiDAR (light detection and ranging), all while delivering rich, high-fidelity 3D audio in all directions.
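A rough back-of-envelope calculation shows the scale of the problem. All of the numbers below are illustrative assumptions rather than measurements or vendor targets, but the ratio makes the point: brute force alone won’t get us there, which is exactly why techniques like foveated rendering matter.

```python
# Rough back-of-envelope on rendering load (all numbers are illustrative
# assumptions, not measurements or targets from any vendor).

current = {
    "pixels_per_eye": 2000 * 2000,  # ~4 Mpx per eye, typical of today
    "frame_rate_hz": 90,
}
future = {
    "pixels_per_eye": 8000 * 8000,  # a hypothetical "retinal" panel
    "frame_rate_hz": 120,
}

def pixels_per_second(spec):
    return spec["pixels_per_eye"] * 2 * spec["frame_rate_hz"]  # two eyes

ratio = pixels_per_second(future) / pixels_per_second(current)
print(f"Shaded pixels per second: {pixels_per_second(current):.2e} today")
print(f"                          {pixels_per_second(future):.2e} future")
print(f"Raw increase: ~{ratio:.0f}x before any gains in per-pixel cost")
```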

Will we ever get there? I’m optimistic that the industry will steadily chip away at this. That, combined with innovative cloud computing that can offload some of the most intensive rendering work and faster, lower-latency networks, can deliver the required horsepower in the coming years.

Software Solutions


All these amazing pixels, sensors, and chips are worth little without great software to bring them to life; even if we deliver on all of the hardware challenges above, the experience won’t be compelling without it. Content creators and studios will need to build new tools to create compelling digital experiences that get the most out of the hardware.

Software developers have been thinking about this since the late 1990s, and smartphones ushered in a new era of development in the mid-2000s with handheld AR experiences. Modern software development kits now enable powerful functionality that brings together the physical and digital worlds in new ways. The latest headsets make virtual avatars more expressive by reading facial expressions and recreating them virtually, and software can also read the environment and recognize 3D objects. The tools are becoming increasingly sophisticated and easier to use. I would challenge the software industry to ‘show us what you got’ for future human digital experiences. The display industry needs you.
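To give a flavor of what those toolkits do, here is a minimal sketch of driving an avatar from face tracking. It assumes a hypothetical SDK that reports expression “blendshape” weights in the 0..1 range each frame; none of the names below belong to any specific vendor’s API.

```python
# Minimal sketch of driving an expressive avatar from face tracking,
# assuming a hypothetical SDK that reports per-frame "blendshape"
# weights in 0..1. All names here are illustrative, not a real API.

def smooth(previous: float, current: float, alpha: float = 0.5) -> float:
    """Exponential smoothing to hide frame-to-frame tracking jitter."""
    return alpha * current + (1 - alpha) * previous

def drive_avatar(prev_pose: dict[str, float],
                 tracked: dict[str, float]) -> dict[str, float]:
    """Clamp and smooth tracked weights before applying them to the avatar."""
    return {
        name: smooth(prev_pose.get(name, 0.0), max(0.0, min(1.0, weight)))
        for name, weight in tracked.items()
    }

pose = {}  # start from a neutral face
for frame in ({"smile": 0.9, "jaw_open": 0.2},   # simulated tracker output
              {"smile": 0.7, "jaw_open": 0.3}):
    pose = drive_avatar(pose, frame)
    print(pose)
```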

Challenges & Collaboration


To make all this a (virtual) reality, the industry must overcome a few challenges first. The problems that need to be solved in software and hardware to create truly immersive AR/VR experiences are enormous, but I believe we can overcome them with the right innovations and collaborations. For example, more energy-efficient displays will require breakthroughs in display design and materials to extract more of the light the display produces, along with more efficient chips.

To drive the future of AR/VR, the software, sensors, and display hardware all need to work in concert to deliver better display experiences for humans. The display industry could benefit greatly from more collaboration with software, sensor, and chip companies to get us to the next level. We need these innovators to push the limits of their technologies to help enable more realistic human interactions. This cross-collaboration will no doubt fuel more innovation in this space. Few markets are as interconnected with other markets as AR/VR, so we should take advantage of that.

This is an area I’m especially interested in right now. It’s all about the immersive experiences we are trying to create.

I had a conversation about this topic recently with display industry pioneer and visionary Tara Akhavan, who had some great insights about this. She told me, “The post-Covid era and events such as the chip shortage are proving to us more than ever that the siloed industry is not sustainable. We need cross-industry and ecosystem partnerships more than ever to create the most optimized display UX (user experience) demanded by consumers. Targeted collaborations have never been as key a part of the display ecosystem as they are today. We see across-the-board more and more announcements in this regard.”

It’s great advice for our industry. Displays are improving every day, but I believe software, sensors, and chip companies can contribute more to help us overcome these challenges and vastly improve the human experience.

I can’t wait to see what you all come up with.

About the Writer


Jeff Yurek is a former creative professional turned marketer passionate about storytelling and technology. He is currently Vice President of Marketing at Nanosys, Inc. in Silicon Valley, Calif., and serves as the Vice Chair of Marketing for the Society for Information Display (SID’s Display Week 2023 will be held in Los Angeles, Calif. May 21-25, 2023). Contact Jeff at jeff@jeffyurek.com.