
AI-Powered Spaces That Work for Your Business: With Q-SYS

Christopher Jaynes

Struggling to keep your hybrid workforce engaged and productive? Enter high-impact spaces: driven by the transformative power of AI, they are changing the way we work and interact in both physical and digital environments.

In this episode we dive into the exciting possibilities of high-impact spaces, exploring their potential alongside the technology, tools, and partnerships making them a reality.

Listen Here

Apple Podcasts | Spotify | Amazon Music

Our Guest: Q-SYS

Our guest this episode is Christopher Jaynes, Senior Vice President of Software Technologies at Q-SYS, a division of the audio, video, and control platform provider QSC. At Q-SYS, Christopher leads the company’s software engineering as well as advanced research and technologies in the AI, ML, cloud, and data space.

Podcast Topics

Christopher answers our questions about:

  • 2:19 – High-impact spaces versus traditional spaces
  • 4:34 – How AI transforms hybrid environments
  • 10:02 – Various business opportunities
  • 12:59 – Considering the human element
  • 16:23 – Necessary technology and infrastructure
  • 19:24 – Leveraging different partnerships
  • 21:10 – Future evolutions of high-impact spaces

Related Content

To learn more about high-impact spaces, read Exploring the World of AI-Powered, High-Impact Spaces and High-Impact Spaces Say “Hello!” to the Hybrid Workforce. For the latest innovations from Q-SYS, follow them on X/Twitter at @QSYS_AVC and LinkedIn at QSC.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re diving into the world of high-impact AI-powered spaces with Christopher Jaynes from Q-SYS.

But before we get started, Chris, what can you tell our listeners about yourself and what you do at Q-SYS?

Christopher Jaynes: Yeah, well, thanks for having me here. This is exciting. I can’t wait to talk about some of these topics, so it’ll be good. I’m the Senior Vice President for Software and Technology at Q-SYS. So, we’re a company that enables video and audio and control systems for your physical spaces. But in reality I’m kind of a classical computer scientist. I was trained as an undergrad in computer science, got interested in robotics, followed a path into AI fairly quickly, did my PhD at University of Massachusetts.

And I got really interested in how AI and the specialized applications that were starting to emerge around the year 2000 on our desktops could move into our physical world. So I went on, I left academics and founded a technology company called Mersive in the AV space. Once I sold that company, I started to think about how AI and some of the real massive leaps around LLMs and things were starting to impact us.

And that’s when I started having conversations with QSC, got really interested in where they sit—the intersection between the physical world and the computing world—which I think is really, really exciting. And then joined the company as their Senior Vice President. So that’s my background. It’s a circuitous path through a couple different industries, but I’m now here at QSC.

Christina Cardoza: Great. Yeah, can’t wait to learn a little bit more about that. And Q-SYS is a division of QSC, just for our listeners. So I think it’s interesting—you say in the 2000s you were really interested in this, and it’s just interesting to see how much technology has advanced, how long AI has been around, and yet how it’s only hitting the mainstream and seeing wider adoption today, after a couple of decades. So I can’t wait to dig into that a little bit more.

But before we get into that, I wanted to start off the conversation. Let’s just define what we mean by high-impact spaces: what are traditional spaces, and then what do we mean by high-impact spaces?

Christopher Jaynes: Yeah. I mean, fundamentally, I find that term really interesting. I think it’s great. It’s focused on the outcome of the space, right? In the old days we’d talk about a huddle room or a large conference room or a small conference room. Those are physical descriptions of a space—not so exciting. I think what’s way more interesting is what’s the intended impact of that space? What are you designing for? And high-impact spaces, obviously, call out the goal. Let’s have real impact on what we want to do around this human-centered design thing.

So, typically what you find in the modern workplace and in your physical environments now after 2020 is a real laser focus on collaboration, on the support of hybrid work styles, deep teamwork, engagement—all these outcomes are the goal. And then you bring technology and software and design together in that space to enable certain work styles really quickly—quick and interesting.

I’ll tell you one thing that’s really, really compelling for me is that things have changed dramatically. And it’s an easy thing to understand. I am at home in my home office today, right? But I often go into the office. I don’t go into the office to replicate my home office. So, high-impact spaces have gotten a lot of attention from the industry, because the reason somebody goes into their space is to find something they can’t get at home—this more interesting, higher-impact, technology-enabled experience: things you can do there together with your colleagues, like bridging a really exciting collaborative meeting with remote users in a seamless way. I can’t do that here, right?

Christina Cardoza: I think that’s a really important point, especially as people start going back to the office more, or businesses have initiatives to get more people back in the office or really increase that hybrid workspace. Employees may be thinking, “Well, why do I have to go into the office when I can just do everything at home?” But it’s a different environment, like you said, a different collaboration that you get. And of course, we’ve had Zoom, and we have whiteboards in the office that give us that collaboration. So how is AI bringing it to the next level or really enhancing what we have today?

Christopher Jaynes: Well, let me first say I think the timing couldn’t be better for some of the breakthroughs we’ve had in AI. I’ve been part of the AI field since 1998, I think, and watching what’s happened—it’s just been super exciting. I mean, I know most of the people here at QSC are just super jazzed about where this all goes—both because of what it can do to transform your own company and because of what it does for how we all work together, how we even bring products to market. It’s super, super timely.

If you look at some of the bad news around 2020, there are some outcomes in learning and employee engagement that we’re all now aware of, right? There are studies that have come out that showed: hey, that was not a good time. However, if you look back at the history of the world, whenever something bad like this happens, the outcome typically means we figure it out and improve our workplace. That happened after the cholera epidemic and some of the other crises way back in the early days.

What’s great now is AI can be brought to bear to solve some of these, what I’d call grand challenges of your space. These are things like: how would I take a remote user and put them on equal footing, literally equal footing from an engagement perspective, from an understanding and learning perspective, from an enablement perspective—how could I put them on an equal footing with people that are together in the room working together on a whiteboard, like you mentioned, or brainstorming around a 3D architectural model. How does all of that get packaged up in a way that I can consume it as a remote user? I want it to be cinematic and engaging and cool.

So if you think about AI in that space, you have to start to think about classes of AI that leverage generative models, like these large language models, but go a little bit past that into other areas of AI that are also starting to undergo their own transformations. These are things like computer vision; real-time audio processing and understanding; and control and actuation—so, kinematics and robotics. So what happens, for example, when you take a space and you equip it with as many sensors, vision sensors, as you want? Like 10, 15 cameras—could you then take those cameras and automatically track users that walk into the space, and track the user that is the most important at the moment? Like, where would a participant’s eyes likely be—which user would they be tracking if they were in the room, versus people who aren’t? How do you crop those faces and create an egalitarian view for remote users?

So that’s some work we’re already doing now that was part of what we’re doing with Vision Suite, the Q-SYS Vision Suite. It’s all driven by some very sophisticated template and face tracking, kinesthetic understanding of the person’s pose—all this fun stuff so that we can basically give you the effect of a multi-camera director experience. Somebody is auto-directing that—the AI is doing it—but when you’re remote you can now see it in exciting ways.
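
To make that auto-director idea a bit more concrete, here is a minimal, self-contained sketch of how a system might score faces seen by multiple cameras and pick a head-and-shoulders crop for remote viewers. The FaceObservation fields, the scoring weights, and the pick_director_shot helper are hypothetical illustrations, not the Q-SYS Vision Suite implementation.

```python
from dataclasses import dataclass

@dataclass
class FaceObservation:
    """One detected face from one camera (hypothetical vision-pipeline output)."""
    camera_id: str
    bbox: tuple           # (x, y, w, h) in pixels
    is_speaking: bool     # from audio/visual speech detection
    facing_camera: float  # 0.0-1.0 head-pose score

def pick_director_shot(observations, frame_size=(1920, 1080), margin=1.6):
    """Choose the camera and crop that best frames the current speaker.

    Scoring is a simple heuristic: prefer faces that are speaking,
    facing the camera, and large in the frame.
    """
    def score(obs):
        _, _, w, h = obs.bbox
        size = (w * h) / (frame_size[0] * frame_size[1])
        return (2.0 if obs.is_speaking else 0.0) + obs.facing_camera + size

    if not observations:
        return None  # fall back to a wide room shot

    best = max(observations, key=score)
    x, y, w, h = best.bbox
    # Expand the face box into a head-and-shoulders crop, clamped to the frame.
    cw, ch = int(w * margin), int(h * margin)
    cx = min(max(x - (cw - w) // 2, 0), frame_size[0] - cw)
    cy = min(max(y - (ch - h) // 2, 0), frame_size[1] - ch)
    return {"camera": best.camera_id, "crop": (cx, cy, cw, ch)}

# Example: two cameras see two people; the speaker facing camera B wins.
frame = pick_director_shot([
    FaceObservation("cam_a", (200, 300, 180, 180), False, 0.4),
    FaceObservation("cam_b", (800, 250, 220, 220), True, 0.9),
])
print(frame)
```

In a real deployment the observations would come from per-camera detectors and the chosen crop would feed a video compositor; the point of the sketch is simply that the “director” can be an explicit, testable scoring function.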

Audio AI—so it’s really three pillars, right? Vision AI, audio AI, and control or closed-loop control and understanding. Audio AI obviously can tag speakers and auto-transcribe in a high-impact space—that’s already something that’s here. If you start to dream a little further you can say, what would happen if all those cameras could automatically classify the meeting state? Well, why would I want to do that? Is it a collaborative or brainstorming session? Is it a presentation-oriented meeting?

Well, it turns out maybe I change the audio parameters when it’s a presentation of one to many, versus a collaborative environment for both remote and local users, right? Change the speaker volumes, make sure that people in the back of the room during the presentation can see the text on the screen. So I autoscale, or I choose to turn on the confidence monitors at the back of that space and turn them off when no one’s there to save energy.
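
As a rough illustration of that kind of state-driven behavior, the sketch below classifies a room as idle, presentation, or collaboration from a few simplified signals and maps the result to example settings. The signal names, thresholds, and settings are hypothetical assumptions, not Q-SYS parameters.

```python
from dataclasses import dataclass

@dataclass
class RoomSensing:
    """Hypothetical, simplified signals a vision/audio pipeline might report."""
    people_in_room: int
    distinct_speakers_last_5min: int
    someone_at_front_screen: bool
    remote_participants: int

def classify_meeting(s: RoomSensing) -> str:
    """Very rough heuristic classifier: presentation vs. collaboration vs. idle."""
    if s.people_in_room == 0 and s.remote_participants == 0:
        return "idle"
    if s.someone_at_front_screen and s.distinct_speakers_last_5min <= 2:
        return "presentation"
    return "collaboration"

def room_profile(state: str, s: RoomSensing) -> dict:
    """Map the classified state to example room settings (illustrative values only)."""
    if state == "idle":
        return {"rear_confidence_monitors": False, "speaker_gain_db": -20, "ui_text_scale": 1.0}
    if state == "presentation":
        return {
            "rear_confidence_monitors": s.people_in_room > 6,  # help the back rows
            "speaker_gain_db": -6,
            "ui_text_scale": 1.5,
        }
    return {"rear_confidence_monitors": False, "speaker_gain_db": -10, "ui_text_scale": 1.0}

sensing = RoomSensing(people_in_room=9, distinct_speakers_last_5min=1,
                      someone_at_front_screen=True, remote_participants=3)
state = classify_meeting(sensing)
print(state, room_profile(state, sensing))
```

The value of running this as a closed loop is that the mapping evaluates continuously, so the room reconfigures itself as the meeting changes rather than waiting for someone to touch a control panel.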

Those are things that people used to shy away from in the AV industry because they’re complicated and they take a lot of programming and specialized behaviors and control. You basically take a space that could have cost you $100K and drive it up to a $500,000 or $600,000 deployment. Not repeatable, not scalable.

We can democratize all that through AI control systems, generative models that summarize your meeting. What would happen, for example, if you walked in, Christina, you walked into a room and you were late, but the AI knew you were going to be late and auto-welcomed you at the door and said, “The meeting’s been going for 10 minutes. There’s six seats at the back of the room. It’s okay, I’ve already emailed you a summary of what’s happened so that you can get back in and be more engaged.” That’s awesome. We should all have that kind of stuff, right? And that’s where I get really excited. It’s that AI not on your desktop, not for personal productivity, but where it interacts with you in the physical world, with you in that physical space.

Christina Cardoza: Yeah, I think we’re already seeing some small examples of that in everyday life. I have an Alexa device that, when I ask it in the morning to play music or what the weather is, says, “Oh, good morning, Christina.” And it shows me things on the screen that are personalized to me more than to anybody else in my home. So it’s really interesting to see some of this happening already in everyday life.

We’ve been predominantly talking about the business and collaboration in office spaces. I think you started to get into a couple of different environments, because I can imagine this being used in classrooms or lecture halls, stores—other things like that. So can you talk about the other opportunities or different types of businesses that can leverage high-impact spaces outside of that business or office space? If you have any customer examples you want to highlight or use cases you can provide.

Christopher Jaynes: We really operate—I just think about it in the general sense of what your physical space and experience will look like. What’s that multi-person user experience going to be when you walk into a hotel lobby? How do you know what’s on the screens? What are the lighting conditions? If you have an impaired speaker at a theme park, how do you know automatically to up the audio levels? Or if somebody’s complaining in a space, saying, “This sounds too echoey in here,” how do you use AI audio processing to do echo cancellation on demand?

So that stuff can happen in entertainment venues; it can happen in hospitality venues. I tend to think more about educational spaces, partly because of my background, but also the enterprise space as well, just because we spend so much time focusing on it and we spend a lot of time in those spaces, right?

So, I want to make one point though: when we think about the use cases, transparency of the technology is always something I’ve been interested in. How transparent can you make the technology for the user? And it’s kind of a design principle that we try to follow. If I walk into a classroom or I walk into a theme park, in both of those spaces if the technology becomes the thing I’m thinking about, it kind of ruins this experience, right?

Like if you think about a classroom where I’m a student and I’m having to ask questions like: “Where’s the link for the slides again?” or, “I can’t see what’s on monitor two because there’s a pillar in the way. Can you go back? I’m confused.” Same thing if I go to a theme park and I want to be entertained and immersed in some amazing new experience—I’m learning about space, or I’m going on a journey somewhere—and instead I’m thinking, “The control system seems slow here,” right?

So you need to basically set the bar so high, which I think is fun and interesting. You set the technology bar so high that you can make it transparent and seamless. I mean, when was the last time you watched a sci-fi movie? Sci-fi movies now have figured that out, right? All the technology seems almost ghostly and ephemeral. In the ’60s it was lots of video of people pushing buttons and talking and interacting with their tech because it was cool. That’s not where we want to be. It should be about the human being in those spaces, with the technology making that experience totally seamless.

Christina Cardoza: Yeah, I absolutely agree. You can have all the greatest technology in the world, but if people can’t use it or if it’s too complicated, it almost becomes useless. And so that was one of my next questions I was going to ask, is when businesses are approaching AI how are they considering the human element in all of this? How are humans going to interact with it, and how do they make sure that it is as non-intrusive as possible?

Christopher Jaynes: Yeah. And the word “intrusive” is awesome, because that does speak to the transparency requirement. But then that does put pressure on companies thinking through their AI policy, because you want to reveal the fact that your experience in the workplace, the theme park, or the hotel is being enabled by AI. But that should be the end of it. So you’ve got to think carefully about setting up a clear policy; I think that’s really, really key. Not just about privacy, but also the advantages and value to the end users. So, a statement that says, “This is why we think this is valuable to you.”

So if you’re a large bank or something, and you’re rolling out AI-enabled spaces, you’ve got to be able to articulate why it is valuable to the user. A privacy statement that aligns with your culture, of course, is really key. And then also, like I mentioned, allowing users to know when AI is supporting them.

In my experience, though, the one thing I think that’s really interesting is users will go off the rails and get worried—and also they should be, when a company doesn’t clearly link those two things together. And I mean also the vendors. So when we build systems, we should be building systems that support the user from where the data is being collected, right? I mean the obvious example is if I use Uber, then Uber needs to know where I’m located. That’s okay. Because I want them to know that—that’s the value that I’m getting so they can drive a car there, right?

If you do the same in your spaces—say you create a value loop where a user gets up in a meeting and walks away, and their laptop is left behind. The AI system can recognize a laptop—that’s a solved problem—and auto-email me because it knows who I am. That’s pretty cool. And say, “Chris, your laptop’s in conference room 106. There’s not another meeting for another 15 minutes. Do you want me to ticket facilities, or do you want to just go back and get it?”

That kind of closed-loop AI processing is really valuable, but you need to be thinking through all those steps: identity, de-identification for GDPR—that’s super, super big. And if you have this kind of meeting concierge that’s driving you that’s an AI, you have to think through where that data lives. You’d have to be responsible about privacy policies and scrubbing it. And then if a company is compliant with international privacy standards, make that obvious, right? Make it easy to find, and articulate it cleanly to your users. And then I think adoption rates go way up.
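
A minimal sketch of that kind of closed loop, including the de-identification step Christopher mentions, might look like the following. The object detection, badge association, and notifier here are all hypothetical stand-ins; a real system would plug in actual vision, directory, and messaging services.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DetectedObject:
    label: str             # e.g. "laptop", from an object detector
    room: str
    last_owner_badge: str  # badge/credential seen nearest the object

def pseudonymize(badge_id: str) -> str:
    """Store only a salted hash of the badge, not the raw identity (GDPR-style de-identification)."""
    return hashlib.sha256(("demo-salt:" + badge_id).encode()).hexdigest()[:12]

def check_left_behind(obj: DetectedObject, room_empty_since: datetime,
                      next_meeting_at: datetime, notify) -> None:
    """If the room has emptied but an item remains, notify its likely owner."""
    minutes_free = int((next_meeting_at - datetime.now()).total_seconds() // 60)
    if datetime.now() - room_empty_since > timedelta(minutes=2):
        notify(
            owner_ref=pseudonymize(obj.last_owner_badge),
            message=(f"A {obj.label} was left in {obj.room}. "
                     f"The room is free for roughly {minutes_free} more minutes."),
        )

# A fake notifier stands in for an email or chat integration.
check_left_behind(
    DetectedObject("laptop", "Conference Room 106", "badge-4821"),
    room_empty_since=datetime.now() - timedelta(minutes=5),
    next_meeting_at=datetime.now() + timedelta(minutes=15),
    notify=lambda owner_ref, message: print(owner_ref, "->", message),
)
```

One design point the sketch is meant to show: the room system hands off only a pseudonymized reference, and a separate, access-controlled directory would resolve it to an email address, so identity never needs to live in the space itself.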

Christina Cardoza: Yeah. We were talking about the sci-fi movies earlier, where you had all the technologies and pushing of buttons, and then we have the movies about the AI where it’s taking over. And so people have a misconception of what AI or this technology really is and how it’s being implemented. So, I agree: any policies or any transparency about how it’s supposed to work and how it is working just makes people more comfortable with it and really increases the level of adoption.

You mentioned a couple of different things that are happening with lighting, or echo cancellation, computer vision. So I’m curious what the backend of implementing all of this looks like—that technology or infrastructure that you need to set up to create high-impact spaces. Is some of this technology and infrastructure already in place? Is it new investments? Is it existing infrastructure you can utilize? What’s going on there?

Christopher Jaynes: Yeah, that’s a great question, yeah. Because I’ve probably thrown out stuff that scares people, and they’re thinking, “Oh my gosh, I need to go tear everything out and restart, building new things.” The good news is, and maybe surprisingly, this sort of wave of technology innovation is mostly focused on software, cloud technologies, edge technologies. So you’re not necessarily having to replace things like sensors, actuators, cameras and audio equipment, speakers and things.

So for me it’s really about—and this is something I’ve been on the soapbox about for a long time—if you can have a set of endpoints and actuators—this is one reason I even joined QSC—and connect those through a commodity, true network, not a specialized network but the internet, and attach that to the cloud. That, to me, is the topology that enables us to move really fast.

So that’s probably very good news to the traditional AV user or integrator, because once you deploy those hardware endpoints, as long as they’re driven by software the lifecycle for software is much faster. A new piece of hardware comes out once every four or five years. We really can release software two, three times a year, and that has new innovation, new algorithms, new approaches to this stuff.

So if you really think about those three pillars—the endpoints, like the cameras, the sensors, all that stuff in the space, connected to an edge or control processor over the network, and then that thing talking to the cloud—that’s what you need to get on this sort of train and ride it into the software future, because now I can deliver software into the space.

You can use the cloud for deeper AI reasoning and problem-solving—for inference and generation. Analytics—which we haven’t talked about much yet—can happen there as well. So, insights about how your users are experiencing the technology can happen there. Real-time processing happens on that edge component for low latency: echo cancellation, driving control for the pan-tilts—so the cameras in the space—and then the sensors are already there and deployed. So those, to me, are the three pieces.
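
To summarize that endpoint-edge-cloud split in a form you can play with, here is a toy placement sketch: latency-critical work stays on the edge processor, and heavier reasoning and analytics go to the cloud. The device names, task list, and latency numbers are illustrative assumptions, not Q-SYS specifics.

```python
# A toy sketch of the endpoint -> edge -> cloud split described above.

TOPOLOGY = {
    "endpoints": ["ptz_camera_1", "ptz_camera_2", "ceiling_mic_array", "loudspeakers"],
    "edge":      "on-premises processor for low-latency, real-time work",
    "cloud":     "deeper AI reasoning, meeting summaries, usage analytics",
}

# Rough latency budgets for a few workloads (milliseconds).
TASKS = {
    "echo_cancellation": 10,
    "camera_auto_framing": 50,
    "meeting_summary": 5_000,
    "usage_analytics": 60_000,
}

EDGE_LATENCY_CUTOFF_MS = 100

def place_task(name: str) -> str:
    """Keep latency-critical work on the edge processor; ship the rest to the cloud."""
    return "edge" if TASKS[name] <= EDGE_LATENCY_CUTOFF_MS else "cloud"

print("Endpoints on the room network:", ", ".join(TOPOLOGY["endpoints"]))
for task in TASKS:
    print(f"{task:20s} -> {place_task(task)}")
```

The same idea scales down to a configuration file: each workload declares a latency budget, and the deployment tooling decides whether it runs on the edge processor or in the cloud.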

Christina Cardoza: And I know the company recently acquired Seervision—and insight.tech and the IoT Chat are also sponsored by Intel—so I imagine you’re leveraging a lot of partnerships and collaborations to really make some of this happen, like the real-time analytics—those insights that enable better decisions or help implement some of these things.

So, I wanted to talk a little bit more about this: the importance of your partnership with Intel, or of acquiring companies like Seervision, to really advance this domain and make high-impact spaces happen.

Christopher Jaynes: Oh, that’s an awesome question. Yeah, I should mention that QSC, the Q-SYS product, the Q-SYS architecture, and even the vision behind it was to leverage commodity compute to build software for things that, at the time it launched, people thought, “No, you can’t do that. You need to go build a specialized FPGA or something custom to do real-time audio, video, and control processing in the space.” So really the roots of Q-SYS itself are built on the power of Intel processing, which was at the time very new.

Now I’m a computer scientist, so for me that’s like, okay, normal. But it took a while for the industry to move out of that—the habit of building very, very custom hardware with almost no software on it. With Intel processors we’re able to be flexible and build AV processing. Even AI algorithms now, with some of the on-chip computing stuff that’s happening, can be leveraged with Intel.

So that’s really, really cool. It’s exciting for us for sure, and it’s a great partnership. So we try to align our roadmaps together, especially when we have opportunity to do so, so that we’re able to look ahead and then deliver the right software on those platforms.

Christina Cardoza: Looking ahead at some of this stuff, I wanted to see, because things are changing rapidly every day now—I mean, when you probably first got into this in 1998 and back in the 2000s, things that we have today were only a dream back then, and now it’s reality. And it’s not only reality, but it’s becoming smarter and more intelligent every day. So how do you think the future of high-impact spaces is going to evolve or change over the next couple of years?

Christopher Jaynes: I feel like you’re going to find that there is a new employee that follows you around and supports your day, regardless of where you happen to be as you enter and leave spaces. And those spaces will be supported by this new employee that’s really the AI concierge for those spaces. So that’s going to happen faster than most people, I think, even realize.

There’s already AI starting to show up behind the scenes that people don’t really see, right? It’s about real-time echo canceling or sound separation—audio-scene recognition’s a great one, right? That’s already almost here. There are some technologies and some startups that have brought that to bear using LLM technologies and multimodal stuff, and that’ll make its appearance in a really big way.

And the reason I say that is it’ll inform recognition in such a powerful way that not only will cameras recognize what a room state is, but the audio scene will help inform that as well. So once you get to that you can imagine that now you can drive all kinds of really cool end-user experiences. I’ll try not to speculate too much, because some of them we’re working on and they’ll only show up in our whisper suites until we actually announce them. But imagine the ability to drive to your workplace on a Tuesday, get out of your car, and then get an alert that says, “Hey, two of your colleagues are on campus today, and one of them is going to hold the meeting on the third floor. I know you don’t like that floor because of the lighting conditions, but I’ve gone ahead and put in a support ticket, and it’s likely going to be fixed for you before you get there.”

So there’s this like, in a way you can think about the old days of your spaces as being very reactive or even ignored, right? If something doesn’t work for me or I arrive late—like my example I gave you earlier of a class—it’s very passive. There’s no “you” in that picture; it’s really about the space and the technology. What AI’s going to allow us to do is have you enter the picture and get what you need out of those spaces and really flip it so that those technologies are supporting your needs almost in real time in a closed-loop fashion.

I keep saying “closed loop.” What I mean is, the sensing that has happened—maybe it’s even patterns from the last six, seven months—will drive your experience in real time as you walk into the room, or as you walk into a casino, or you’re looking for your hotel space. So I think there’s a lot of thinking going into that now, and it’s going to really make our spaces far more valuable—way more effective at far less cost, really, because it’s software-driven, like I mentioned before.

Christina Cardoza: Yeah, I think that’s really exciting. I’m already seeing a bit of that follow-you-around employee in the virtual space when I log into a Zoom or a Teams meeting; the project manager always has their AI assistant already there, taking notes, transcribing, and pulling out bullet points of the most important things. And that’s just in a virtual meeting. So I can’t wait to see how this plays out in physical spaces, where you don’t have to necessarily integrate it yourself: it’s just seamless, and it’s just happening and providing so much value to you in your everyday life. So, can’t wait to see what else happens—especially from Q-SYS and QSC, how you guys are going to continue to innovate in this space.

But before we go, just want to throw it back to you one last time. Any final thoughts or key takeaways you want to leave our listeners with today?

Christopher Jaynes: Well, first let me just say thanks a lot for hosting today; it’s been fun. Those are some really good questions. I hope that you found the dialogue to be good. I guess the last thought I’d say is, don’t be shy. This is going to happen; it’s already happening. AI is going to change things, but so did the personal computer. So did mobility and the cell phone. It changed the way we interact with one another, the way we cognate even, the way we think about things, the way we collaborate. The same thing’s happening again with AI.

It’ll be transformative for sure, so have fun with it. Be cautious around the privacy and the policy stuff we talked about a little bit there. You’ve got to be aware of what’s happening, and really I think people like me, our job in this industry is to dream big at this moment and then drive solutions, make it an opportunity, move it to a positive place. So it’s going to be great. I’m excited. We are all excited here at Q-SYS to deliver this kind of value.

Christina Cardoza: Absolutely. Well, thank you again for joining us on the podcast and for the insightful conversation. Thanks to our listeners for tuning into this episode. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Author

Christina Cardoza is an Editorial Director for insight.tech. Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
