During the Game Developers Conference 2023, we got the opportunity to speak with Jane Hsu, Head of Business Development, Spatial Computing Product BU at Acer, about SpatialLabs.
This interview has been edited for clarity.
Can you tell us about the announcement you made a week ago, where you gave updates about SpatialLabs and the new profiles you're working on?
Jane Hsu: So far we have 70 games on TrueGame, and we're still working on how we can provide more games every month.
One thing we just finalised is what we call the Unity and Unreal TrueGame Base Profiles. Games built on these engines share a similar rendering pipeline, so the Base Profile takes care of some of the basics. Still, each game is different, so this lets us spend more time tweaking, optimising performance, and doing a lot of testing. We can focus on those tasks to make sure we always give the best experience.
As an example of these Base Profiles, we worked together with Square Enix. They recently released Octopath Traveler II; we got the game two weeks before its official release, and the 3D profile was done in three weeks. By the time the game released, we were almost ready with the 3D experience.
And this is a win-win: for studios and game developers using Unity or Unreal, we can really speed up our process to support them on day one.
For the past few years, console games weren't doing very well in the Japanese market because of the lack of PS5 supply. In the last two years, a lot of companies like CAPCOM and Square Enix started building their own PC porting teams.
Do you think that actually helps SpatialLabs in terms of getting more console games?
Jane Hsu: The way I see it, I wouldn't say it's a trend so much as a result of the lack of hardware. Developers and studios are starting to understand that PC gaming can actually provide a very high-quality experience as well.
I think back then, because game consoles focused only on gaming graphics, they tended to provide a better experience in terms of graphics performance. But now the whole PC industry is moving so rapidly, especially in rendering and the graphics experience. I see that kind of change not just from the PlayStation studios, but from Xbox too, for sure. Microsoft is also pushing to get all their games available on the Windows platform.
I think thanks to the rendering power and how quickly it's moving, it's definitely helping by making it possible for us to create the SpatialLabs experience, because for the moment we're really Windows-based.
Are there any upcoming games you're working on that you can share with us?
Jane Hsu: I would say a lot of AAA games, and some retro ones. One I can tell you about is Diablo III, that's a good one. For the upcoming ones, we would love to [tell you], but we haven't had the chance to talk to the studios yet, so let's see. We would like to see how we can get our hands on the new upcoming ones too.
How much computing power does SpatialLabs need compared to traditional 2D rendering?
Jane Hsu: For our hardware, we need two views, right? One for the left eye and one for the right eye, so you're essentially rendering two images. And it's 2K [resolution] per eye, so just look up the requirements to render the game in 4K, and that's roughly the computing power needed.
If you're playing God of War, which we do support, then yes, you'll need a very powerful machine. But if you're playing Octopath Traveler II, that takes much less [computing power].
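As a quick sanity check of that comparison — assuming "2K per eye" means each eye gets half the columns of a UHD (3840×2160) panel, which is our reading rather than something stated explicitly in the interview — the pixel math for two stereo views works out to exactly one 4K frame:

```python
# Back-of-the-envelope pixel math for stereo rendering on a UHD panel.
# Assumption (not from the interview): "2K per eye" = half the panel's
# horizontal columns at full vertical resolution.
PANEL_W, PANEL_H = 3840, 2160

per_eye = (PANEL_W // 2) * PANEL_H   # pixels rendered for one eye
both_eyes = 2 * per_eye              # total pixels per stereo frame
uhd_frame = PANEL_W * PANEL_H        # pixels in a single ordinary 4K frame

# Two "2K" views add up to the same pixel count as one 4K frame,
# which matches Hsu's "look up the requirement to render in 4K" rule of thumb.
assert both_eyes == uhd_frame
```

This is only a pixel-count comparison; real stereo rendering also duplicates per-view work such as geometry and draw calls, so 4K requirements are a rule of thumb rather than an exact equivalence.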
We noticed that the Predator Helios 300 SpatialLabs Edition is capped at 60Hz. Is it possible to go higher than that?
Jane Hsu: We would love to, but then you would also need an even higher-performance system, right? That is definitely something we are looking at now, to see how we can get higher frame rates.
Do you think the price point for the SpatialLabs products is currently a bit on the premium side?
Jane Hsu: Definitely, yes.
The reason I'm asking is that the more premium the product, the higher the barrier to entry. If I want to buy the product, I need to spend X amount of money just to try it. If the barrier were lower, more people would get to experience the product.
What do you think about the price point and do you think that in the future it would get cheaper?
Jane Hsu: I think you can look at it two ways. One is: do we have an experience that is good enough to be branded as a premium experience? For that, I think we did a pretty good job in terms of visual quality and everything. You get really sharp, very good quality 3D visuals, and for that experience, yes, it's definitely premium. But that doesn't mean we always have to set it at a premium price point, right?
At the moment, it's not just the system requirements, but also the 3D module production. We are working together with our partner to see how we can increase the yield rate and also reduce the production cost.
But for now, I have to admit it's a very advanced production process. So unfortunately, within the next twelve months, I would say the cost of the 3D module will probably stay within that range.
But in another three to five years, we do see it becoming something very common for anyone.
Since SpatialLabs relies on facial recognition, have you had any problems with people with different features, like varying eye sizes or skin tones? If so, how do you overcome those problems?
Jane Hsu: Yeah, that's very interesting. For tracking, we went through so much, because we did [have those problems] at one point. We found out that we didn't have enough support to understand darker skin tones, or how we could navigate that, and at the time we even had problems with masks.
When we started with the technology, we were tracking the whole face. So when you were wearing a mask, especially a black mask, we would have problems with that.
Luckily, the best part is that it's machine learning, so we can continue to train the algorithm to understand better. So we started putting in more variety, like people with masks on. We started training our algorithm with people with darker skin tones, Asian faces, all sorts of ethnicities.
Sometimes you find out that you don't have complete coverage, but the good part of this technology is that you can train it. We can always input more variety, more different types of people, or even masks.
Now, with the trained algorithm, we support people with masks on very well.
When it comes to 3D, VR, or anything related, a lot of people experience 3D motion sickness. How does SpatialLabs mitigate or reduce this kind of problem?
Jane Hsu: VR is a little different; the motion sickness there comes from latency. Also, in VR you're fully immersed in a world that's different from your reality, and you're teleporting and moving, but your brain says, "No, you're not moving." That's where the confusion comes from, and then your brain goes, "Something is wrong."
For people who are sensitive to this kind of unsynchronised experience — your visual sense tells you you're moving, but your body feels like you're not — the brain gets confused, and that creates sickness. That's part of the motion sickness: even when latency doesn't get in the way and you have smooth content, some people will still feel it.
I think 3D is different. In the old days, with 3D TVs or 3D gaming devices, some people would still feel 3D motion sickness. A big reason is that the image that was supposed to go into your right eye leaked into your left eye. You get that blurry, mixed message, and when your brain starts processing the stereo, it doesn't really match. That's where 3D motion sickness comes in with the naked eye.
That's why we introduced eye tracking. We constantly track where your eyes are to make sure that the pixels that need to go into your left eye don't go to your right eye. The benefits are, one, you don't get any 3D sickness, and two, you get a really sharp visual, unlike past experiences that weren't sharp and crisp.
I think the other problem I might have with SpatialLabs is that when we tilt our head or move it slightly more than we're supposed to, the images just don't match anymore. Is there a way to solve that problem?
Jane Hsu: There are two things about this. For one, it's not about tilting the head; it's probably that your eyes went out of range of the tracking camera. There are limitations to the tracking camera implementation on the laptop: because you have such a thin bezel, the camera lens has a field of view that probably can't cover all of your movement.
There's also another thing about 3D: when you view 3D from the side, you lose it. You really lose it, regardless of eye tracking, because the design of the lenticular lens gives you a box within which you can actually see the 3D. If you're out of that range, it won't pick up your eye positions correctly.
So yeah, there are still such limitations. I wouldn't say it's about the tracking, but rather a display limitation of the 3D module. It doesn't cover all angles, and if you're off to the side of the screen, you'll have limitations in seeing the 3D.
Do you have anything that you think we should know or anything that you want to tell us?
Jane Hsu: I would just like to say that we are here [at GDC] to understand how the gaming community — game producers, art designers — feels about this technology [SpatialLabs].
The question I get asked the most is whether this is already available, and when I tell them yes, they can't believe it. They feel like it's just a proof of concept because it's so futuristic; it doesn't feel like something that's already happening now.
It's really interesting, because stereo 3D is definitely not a new technology. But when it was first introduced, the experience wasn't great, like you said. It might have felt blurry in the past, and people would feel 3D sickness and all that.
We think it's the right timing with all these technologies: system performance is way better than five years ago, and then there's the 4K panel, eye tracking, and machine learning AI. All these things coming together can really provide a great experience. Even people who say, "Okay, I'm just curious what kind of stereo 3D this is," realise it's really not what they thought it would be, and everyone is blown away. I think for them, it's really a futuristic gaming experience.
I'm really excited and looking forward to their comments and to what they think this technology can bring them. A lot of them just asked us, "What do we need to do to get our games [to be] like this?"
I guess we really got their attention and their stamp of approval. So, I’m really happy to be here [at GDC] and getting direct feedback from the gaming community.
We would like to thank Jane Hsu for taking the time to answer our questions. For more information on SpatialLabs and other Acer products, do check the official Acer website here.