

QSRs—Voice AI Will Now Take Your Order: With Sodaclick

Salwa Al-Tahan, Stevan Dragas

Join us for our very first episode of “ Talk,” where we discuss how voice AI transforms the QSR experience—boosting efficiency and creating smoother interactions for customers and employees alike.

Just as our new name reflects the ever-changing tech landscape, this episode explores how voice assistants enable QSRs to take orders faster and more accurately, reducing staff workload and handling complex requests. The result: shorter lines, happier customers, and more consistent service.

Listen in as we explore benefits, address potential challenges, and peek into how voice AI impacts other areas of the industry.

Listen Here

Apple Podcasts      Spotify      Amazon Music

Our Guests: Sodaclick and Intel

Our guests this episode are Salwa Al-Tahan, Research and Marketing Executive for Sodaclick, a digital content and AI experience provider; and Stevan Dragas, EMEA Digital Signage Segment Manager for Intel. At Sodaclick, Salwa focuses on raising awareness of the benefits of conversational AI across all industries. Stevan has been with Intel for more than 24 years, where he works to drive development of the EMEA digital signage segment and showcase the benefits of Intel tools and technologies.

Podcast Topics

Salwa and Stevan answer our questions about:

  • 6:02 – How voice AI enhances QSR experiences
  • 12:53 – Voice AI infrastructure and investments
  • 15:08 – Technological advancements making voice AI possible
  • 20:15 – Real-world examples of voice AI in QSRs
  • 24:57 – Voice AI opportunities beyond QSRs

Related Content

To learn more about conversational voice AI, read Conversational Voice AI: May AI Take Your Order and Talking Up Self-Serve Patient Check-In Kiosks in Healthcare. For the latest innovations from Sodaclick, follow them on Twitter at @sodaclick and on LinkedIn. For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.


Christina Cardoza: Hello and welcome to the “ Talk.” I’m your host, Christina Cardoza, Editorial Director of And some of our long-term listeners probably have already picked up that we’ve updated our name from the IoT Chat to “ Talk,” and that’s because, as you know, this technology space is moving incredibly fast, and we wanted to reflect the conversations that we will be having beyond IoT. But don’t worry, you’ll still be getting the same high-quality conversations around IoT technology, trends, and latest innovations. This just allows us to keep up with the pace of the industry.

So, without further ado, I want to get into today’s conversation, in which we’re going to be talking about voice AI in quick-service restaurants with Sodaclick and Intel. So, as always, before we jump into the conversation, let’s get to know our guests. Salwa from Sodaclick, I’ll start with you. What can you tell us about yourself and Sodaclick?

Salwa Al-Tahan: Hi, Christina. Thank you. So I’m Salwa Al-Tahan, Head of Marketing and Research at Sodaclick. Thank you for inviting me to join this podcast. So, Sodaclick is a London-based AI company. We actually started, for those that don’t know, in 2017 as a digital-content platform. But AI was always part of the vision. And in 2019 they actually opened up the AI division, primarily focused on voice AI—though that was quite linear, command-driven. And they always knew it needed to be more natural, more human-like, more conversational.

So, the co-founders are really hot on being at the forefront of technology, always innovating, always looking to improve. And, with the advent of generative AI, they started fine-tuning their LLM, and that’s where we are now. Now we’re a London-based company with a global presence.

Christina Cardoza: Great! Looking forward to digging into some of that. Especially making voice AI more natural, because I’m sure a lot of people have had the displeasure of those customer service voice AI chatbots that you’re always screaming at on the phone, trying to get it to understand you, or trying to get where you need to go and trying to talk to a human. So, looking forward to how that’s not only being brought into the restaurant space, but I know Sodaclick does things in many other industries. So we’ll dig into that a little bit in our conversation.

But before we get there, Stevan, welcome to the show. What can you tell us about yourself?

Stevan Dragas: That’s interesting. So, Stevan Dragas. Why it’s interesting is because over the last 24 years at Intel I’ve done so many exciting roles and positions. And on a recent visit, where I had the pleasure of taking Sodaclick to join Intel Vision in the U.S., Ibrahim, one of the founders of Sodaclick, actually reminded me that back in 2019, when they moved into voice, that was the first time he met me. I, unfortunately, had forgotten that. But he reminded me that we met for the first time then, and I gave them some hints and advice on what would work and what would not. And I have to say, I’m almost glad that they listened to me at the time.

Because with what we are doing—what Sodaclick is doing at the moment—we cover everything from the edge to the cloud, ultimately driving new usage models, driving user experience, driving benefits, from the end user to the retailer to the QSR operator. Ultimately we are driving new experiences and usage models, and changing industries.

Now, my role is basically to promote and support Intel platform products, technologies, software, across multiple vertical industries, from which QSR is just one of the vertical industries. So I go horizontal, and I have a number of companies which are just as exciting as Sodaclick, but they’re one of my, let’s say, crown jewels that I am actually pleased and happy that over the last couple of years we really accelerated and will continue.

Specifically because we are now looking into adding some of the new products that Intel has brought to the market. Not just the new products for the cloud, but also, introduced for the first time in the computing industry, new products which have not just a CPU and GPU anymore but also an NPU. And in May Sodaclick will be demonstrating and using their product on this new platform. It was code-named Meteor Lake, but it’s actually the Core Ultra platform.

And it’s really exciting to work across all of these industries, specifically with Sodaclick. They have been so good, and I’m happy to say that we are looking at a lot more than just QSR-type restaurants, because solutions across many vertical industries would benefit from some kind of conversational exchange—asking an open question—rather than a pre-scripted, menu-driven type of conversation with the machine.

Salwa Al-Tahan: Yeah, command-driven is so linear and boring. And, like you say, frustrating to customers as well. These natural interactions with conversational voice AI are definitely the future and the way it is being deployed at the moment.

Christina Cardoza: Yeah, absolutely. So, let’s get into that a little bit, specifically looking at quick-service restaurants: how it is being deployed and used in those areas. Salwa, can you give us a little bit more about what Sodaclick is doing in this space, and how you’re seeing voice AI improve or enhance the QSR experience?

Salwa Al-Tahan: There are two aspects of the QSR industry that are benefiting from the integration of voice AI. One is in-store. So, we’re seeing a handful, I would say, of QSR brands actually integrating voice AI into their in-store kiosks to make them truly omnichannel. The other aspect is at the drive-through, where it becomes the first interaction for a customer as they drive up to the order-taking point—you’ve got your conversational voice AI assistant there. These are the two main focuses at the moment.

And, to be honest, each one comes with its different benefits, I would say. And its different benefits both to the business and to the customer. So, at the in-store kiosk it’s faster. If you think about, if you know exactly what you want going up to a kiosk, having to scroll through the UI, adding extra lettuce, or removing cheese, or these little things—no ice—you actually have to scroll through and it takes time. Whereas it’s faster for you just to say it. And that faster interaction means that you can get a faster throughput as well. You can serve more customers, reduce wait times.

Also, in in-store kiosks, it becomes more inclusive. Having voice AI as an option to customers means that any customers with visual impairments, physical impairments, sensory processing disorders, even the elderly who struggle with accurately touching those touch points to place an order—it becomes much more inclusive to them. They’re able to use their voice for that interaction. So these are some key benefits obviously, as well as upselling opportunities.

At the drive-through it’s a completely different interaction. It’s again polite, it’s friendly, and it’s allowing businesses to unify their brand image with excellent customer service. It’s improving order accuracy. I know from the QSR annual report for drive-throughs that order accuracy improved by 1%: it was 85% in 2022 and moved up to 86% in 2023. With voice AI we’re actually able to bring that up to 96%-plus.

And that is because at the order point it’s quite a repetitive task for members of staff. They’re just constantly doing the same thing. That means that sometimes, unfortunately, you’re not getting the friendly customer service, you’re not getting that bubbly person at the end of their shift. Humans are humans, though. They might be having a bad day. They might not have all the product information that you’re after.

Whereas with the conversational voice AI model, we’re able to consistently give polite, friendly customer service—a warm, human-like interaction. We’re actually able to bring in neural voices, which are so human-like that most people wouldn’t even know they’re talking to an AI. We’re able to offer it in 96 languages and variants, which means you are able to serve a wider community within the area as well, without any order inaccuracies from mishearing something or asking customers to repeat themselves. Language is another really big factor, both in-store and at the drive-through.

Stevan Dragas: Salwa, if I may add, it increases—

Salwa Al-Tahan: Of course, please do.

Stevan Dragas: From working very closely with Sodaclick, it also greatly increases accuracy. It removes the need for the operator to listen closely and try to understand, while at the same time reducing the time to delivery: the moment you already have three or four items listed on the screen, the operator can start making the order, working on the product, rather than waiting for the complete order to be finished. The technology is now stepping in between, helping both sides.

Salwa Al-Tahan: Absolutely, it’s streamlining operations, both for the business and for the customer. So you’re absolutely right, Stevan, that it’s a benefit to both. And it’s also alleviating pressure on members of staff. Like you say, inputting all of that information can be stressful, even though it is repetitive—especially if you’ve got a large queue, people honking their horns. They just want their food fast, and that’s what it’s all about in the QSR industry: getting your food fast.

So by being able to improve order accuracy, it has a knock-on effect on the other benefits: streamlining things for both the business and the customer, but also increasing speed of service and quality of service. And when a member of staff is taken away from that position at the order point, we’re not actually removing them; we’re repurposing them into the kitchen so that they can focus, exactly like you say, on preparing the orders and on other pressing tasks that might be needed in-store—and also improving the quality of customer service.

Christina Cardoza: Yeah. As a customer myself—I guess in preparation for this conversation—I went through a quick-service restaurant drive-through last night. I used the app before I left the house and ordered my food, and then went through the drive-through. But I wanted a sandwich with pickles on it and, like you said, I didn’t want to go through the app and figure out how to add pickles to it. But then I also didn’t want to drive through and talk to an employee, because then—just my own thing—I feel embarrassed, or that I’m being a difficult customer, asking for these modifications and customizations. So if it was an AI I was talking to, I would’ve been much more comfortable ordering the sandwich that I wanted.

And, to your point, it’s that customer experience. But I’m curious—you talked a little bit about the business level, the benefits businesses get and how they can redistribute their employees elsewhere. How can they actually implement this voice AI? What is Sodaclick doing to add this onto the technology that’s already there? Or are there investments that have to be made in the infrastructure to start bringing voice AI to the business and to the customers?

Salwa Al-Tahan: So, actually, if a QSR doesn’t already have the technology, we can work with them and integrate into their existing technology.

Stevan Dragas: So, if I may add to that point: at customer-interaction points, where customers either interact or make purchase orders in existing stores, or even at the drive-through, what Sodaclick brings from the technology side is the microphone—a cone microphone which focuses on the person even in very noisy environments. And it’s doing that with new algorithms developed with Sodaclick, achieving a very high percentage of accuracy. Not only accuracy in hearing the person, but also recognition of accents and different words—in the same environment there could be multiple languages.

From the technology side, they also integrate via APIs with the product stock—directly integrating with the products available. But not only what’s available: they integrate with analysis of the existing products. For instance, are they protein-rich? Are they rich in some other minerals? Again, specifically talking about QSRs now. And from the technical side they also look into what the existing infrastructure is. Maybe the existing infrastructure is enough. Or maybe they need, so to say, a little bit more horsepower, in which case just the computing part needs to be upgraded to process all the information and drive this near-real-time conversational usage model.

Christina Cardoza: Great. And, Stevan, you mentioned some of the Intel technologies that are coming out to help do this more. Because I’m assuming, like you mentioned, there’s a microphone, then we have cameras, there’s algorithms all happening at the backend to make sure that the software can accurately understand what the customer is saying and be able to put that all down and get their order right. So, what are some of the technological advancements coming that make this so that it’s fast, it’s accurate, it’s real time, that it’s natural? How is Intel technology making this all possible and helping companies like Sodaclick bring this to market?

Stevan Dragas: So, there are a couple of things that directly play in on the technology side. One is effectively physics. In order to drive a real-time, or near-real-time, conversational experience for users and customers, decision-making needs to be done at the edge. Processing—running those LLM models—needs to be done at the source, which is the edge integration point, the communication point.

And Intel has recently introduced—and this is an industry first—new processors which now have three engines, all on the same chip: the CPU as it traditionally was, a GPU on top of it, and then an NPU. The NPU is a neural processing unit, which effectively enables AI decision-making to be done at the core, at the edge.

So, the Core Ultra platform products are something that are coming out. There are already a number of them available in the market, but they will become even more widespread in driving this AI user experience, conversational AI. On the other hand, there are a number of products for the cloud, for the edge, for the server. But ultimately, when I said physics: you literally have latency in transmitting data from the point where you make the order, where you converse, and you don’t really have time.

I don’t know if others are like myself; I am not very patient. Sal, you are laughing because you know me. But ultimately, if you need to say something and then wait a couple of seconds for that message to be transferred to the data center, or to the cloud, or somewhere far away, and then for the response to come back—normally I go without lunch if there is a queue. But that may simply be me.

But ultimately if you want to have conversational AI, it needs to be real, and that means the processing has to happen at the edge. This is what Intel is bringing—not only products, but also the Intel® Tiber Edge Platform, and then the OpenVINO framework, which Sodaclick is using. So ultimately it’s not doing technology for the sake of technology, but using technology to enable usage models, to enable experience, to drive the smile, to drive the repeat return to the same or similar environments—to literally break out of the box of the traditional “read the menu and repeat what it says,” or, if you don’t read it, “I don’t understand.” This is where Sodaclick comes in with their software solution.

Salwa Al-Tahan: Just like Stevan was mentioning, I think what a lot of brands were doing at the drive-through order point was reducing their menus. But with conversational voice AI you can actually still have the full menu, have your customers interact with it, and maybe even let them discover new favorites, with opportunities for upsells. And it’s a lot more intuitive as well. And, like Stevan was saying, using OpenVINO means we’re able to create the solution and then scale it across the brand.

Stevan Dragas: Even to add to that—when I mentioned user experience a couple of times—imagine you are a return customer. Maybe there is a loyalty program, maybe some special offer. And imagine you come back, and rather than having to go through your three, four, five items, the sign says, “Hey, welcome back, Christina!” All because you tapped your card, so it knows who you are, and it says, “Hey Christina, shall we have the same—your favorites?” Or something like that.

So automatically, even for you—oh yeah, I don’t need to go through the pain of repeating everything. It already knows, and it suggests, and as Sal mentioned, maybe it can actually focus on an upsell: “Hey, would you like to try a new product? Do you want to experiment?” Or ultimately there is even the option of detecting facial expressions—because a happy customer is a good customer, and ultimately buys more.

Christina Cardoza: Yeah, absolutely. To your point too, if the machine can recognize who you are and what your order has been, and there was maybe a limited-time offer or a new menu item that came out that is similar to what you ordered, they can also give those personalized recommendations: “Would you like to try this?” So this all sounds really great and interesting.

Salwa, I’m curious, do you have any customer examples or use cases of this actually in action that you can share with us?

Salwa Al-Tahan: Yeah, absolutely. So, we’ve been working with Oliver’s, an Australian brand with over 200 stores, both in-store and drive-through. And we’ve deployed conversational voice AI in their in-store kiosks and also at the drive-through. It’s actually been really, really exciting working with Oliver’s, because they were on a completely new digital transformation journey. So we’ve been with them along the way, including their digital signage.

And what was really cool about Oliver’s is, although it’s English, we’ve been able to create the persona of the AI assistant to be very Australian. He’s got his own personality: he’s called Ollie. And he understands Australian slang words; he’ll greet you with “G’day, mate!” and “Cheers!”—in a very natural way to the local customers. And that’s been really, really cool.

The other great thing about working with Oliver’s was that their requirements—their KPIs—were quite different from, let’s say, KFC, who we also work with. Because Oliver’s is a healthy fast-food chain, they know their customers are interested in ingredient lists; they want to know calorie counts and—like Stevan mentioned—protein information and things like that. So we were able to integrate with Prep It, their nutritional database, to provide that information for customers in a very quick and accurate manner. And that’s really cool as well.

Again, as I mentioned, we’ve also been working with KFC, in the MENA region, with more locations to come. We’ve got deployments in Pakistan, Saudi Arabia, and across the UAE, in the different languages. And their requirements were different: they were more focused on speed of service and improving order accuracy. And, again, with conversational voice AI at the drive-through we were able to achieve that for them. And it’s going well.

Stevan Dragas: So Sal, I don’t know if that’s public—well, technically it’s not—but we’re also looking into where else to expand. And ultimately it’s not just the QSR industry, but every place where there is a need for information, core communication, or any discussion, any Q&A. For example, at the moment we are working together with one of the world’s largest football clubs, where we started with conversational AI, which very quickly got a very positive reception across all the different touch points where a conversational AI—the Sodaclick solution—can be integrated: from entering the venue to restaurants or museums, while being very sensitive to the name of the place. There are multiple adjacent vertical-industry opportunities where conversational AI should be and could be; it can operate at a much more natural level.

Salwa Al-Tahan: Absolutely. It’s all about engaging users and creating really positive interactions—memorable interactions, actually. And I think we’re in an age where everyone has such high expectations. They want hyper-personalization, they want interactive experiences. And it’s almost a case of businesses trying to think, “How can I keep up? What innovation can I bring in?” And conversational voice AI is not just a trend; it actually has real uses and benefits. But it is part of the trend—it is quite hot at the moment. So, yeah.

Christina Cardoza: Yeah, absolutely. And that was going to be my next question. Because I know in the past we worked together— and Sodaclick—and we’ve done an article about conversational AI. But it was in the healthcare space: being able to collect information and do things that maybe a receptionist would have done at the patient level, so that the doctor could get the information faster and the patient doesn’t have to wait in line, anything like that. So I was curious, from your perspective, Salwa, what other opportunities or use cases outside of QSRs, or what other industries, do you see voice AI coming to?

Salwa Al-Tahan: Absolutely. So, other than healthcare, I think definitely wayfinding kiosks—airport concierge, for instance. The benefit is that you can have the conversational AI assistant on a kiosk 24 hours a day; you don’t need to have a member of staff manning it. A customer, or any user, can come in and interact with it.

Even think about government buildings—anywhere there’s a check-in, just like Stevan was saying, anywhere you might need to ask a question or get information. At stadiums, simple things like reducing queue times by having these interactive touch points where a customer can come in, scan their ticket, and ask where they can get some food, or ask for directions to their seat—all of that information. In an airport, asking where the bathrooms are, or where they can get a coffee, or—they forgot their headphones—where they can buy headphones in a busy airport. This is really useful.

And I think there could be even more exciting opportunities outside of these ones, which we haven’t explored yet—maybe in FinTech as well. I think it’s just a case of reaching out and seeing wherever there is a need for these personalized interactions. And part of it is also providing a more inclusive world. Again, I keep coming back to this, but offering voice AI as an additional option to touch is partly about providing a more inclusive world. So there are plenty of opportunities to integrate very seamlessly—and it all needs to be done very frictionlessly.

Stevan Dragas: So, Sal, I’m sure you will agree, because of our previous discussions. It’s interesting to see how long it takes for certain products, technologies, and experiences to actually penetrate. We have a number of examples of certain technologies taking X number of years to reach, let’s say, 10 million subscribers. But as we move more and more toward something which is, as Salwa mentioned, more inclusive, more natural, that timeline actually shortens.

And I think with conversational AI we’re almost at an inflection point, where effectively we need to drive people to see and experience it. The moment you learn it—look, I still have difficulties teaching my mom how to open WhatsApp on the tablet. But at the same time, my youngest daughter was not even a year old when she took the phone and already knew how to move and touch. To the level that, effectively, once we get exposed to a certain usage model, experience, or technology, it almost becomes natural.

For my daughter, interaction through the touch screen is the starting point. While for my mom, it’s still like some alien technology. So the moment you experience something, you kind of demand it from other usage models. Think about where else you stand: you stand in line in front of every hotel when you go to check in and wait. All of that could actually be done through a simple kiosk—“Hey, this is me.” Passport check-ins, as you can see at airports—there are a lot more self-check-in lanes now, where you don’t need to queue; you can just go through.

So if we start from QSR, moving to retail, expanding to hospitality, healthcare—ultimately any vertical industry where there is a need for conversation or information-sharing. Sal, you mentioned wayfinding. Wayfinding was great as an innovative usage model. However, if you suddenly need to figure out the interface and touch it accurately—if you need to stand in the queue, and you need to know what you are looking for—it takes so long to type it in. Rather than just saying, “Hey, where can I find a coffee place? Where can I find. . . .”

So suddenly we are not transforming the technology; we’re just bringing a new usage model to existing technology. And that can actually make those products, those usage models, and those vertical industries adopt certain technologies much faster. I think we are really at a kind of crossroads with these technologies: once people get exposed to certain usage models at certain touch points, they will expect the same, similar, or even better experience across other adjacent industries. And I think this is just the beginning of AI; we are certainly going to see a big boom in these usage models and experiences.

Christina Cardoza: Yeah, that pain point with parents being able to use technology—that is something that resonates deeply with me. But to your point, the touchscreen and all these devices and applications—that is something that maybe my generation grew up with, but not my parents’ generation. Conversation, voice, talking—that is something we have all been doing since we were born; it’s very natural to us. So being able to implement these across different industries—these are big technological advancements and innovations, but the result is a much better user experience, much more accessible to people than a touchscreen or a kiosk. So I think it’s great, and I can’t wait to see what else comes out of all of this.

I know we are running a little bit out of time, so before we go I just want to throw it back to you. Any final thoughts or key takeaways you wanted to leave us with today? Salwa, I’ll start with you.

Salwa Al-Tahan: I think actually, just picking up on what both you and Stevan were saying, we’re definitely in the golden age of AI and technology. And it’s not something we’re talking about anymore as being in the distant future; it’s here, it’s now. It’s deployable, and it’s very natural, because, again, like you say, we’ve all been conversing since we were babies. And with the advent of smartphones and everyone using Alexa and Siri in our homes and on our phones, it’s just the natural progression.

And because of the benefits it has across industries, not just in QSR, it’s something we will be seeing more of. And, again, like Stevan was saying, it’s almost a case of: when one brand leads with it, the others will follow, because they will all see how much it is improving their business and their customer experiences, and bringing them a higher ROI. So it’s very much here and now. And it’s very exciting, actually, to be a part of this. So, yeah, there’s definitely a lot coming.

And, again, for anyone who has the misconception that voice-AI systems are going to take away jobs—I just really want to reassure them that it’s not about taking away jobs, but rather about augmenting and helping both businesses and customers by streamlining operations to meet those customer expectations of faster, more intuitive experiences. And we can do that with conversational AI, just by repurposing members of staff. So it is never about taking away a person’s role, but rather giving them purpose somewhere else.

Stevan Dragas: Yeah. And to that point, what I would like people to remember is not to do technology for the sake of technology, but because of what it can bring, what it can enable, what it can drive. At Intel there is a long-standing saying: “It’s not what we make, it’s what we enable.” And this is becoming prevalent and very important going forward. Demand more. The technology is there. Innovation is unstoppable.

And I think from where we started with conversational AI to where we are going now—it’s just the beginning, just the tip of the iceberg. There is so much more if you connect conversational AI to the basic principles of what Intel is doing: security on every product, connectivity, manageability. As long as all of that infrastructure and those applications are safe, manageable, connected, and also driving sustainability, then all of these technology points that people integrate with, collaborate with, and talk to can actually be driven in a much more sustainable way across many vertical industries. And this is just the beginning for Sodaclick, in my personal view.

Salwa Al-Tahan: Absolutely. I mean, all of these core values resonate with Sodaclick’s values as well. And we can pass those benefits on to the customer as well. So, like you say, it’s just the beginning, but it’s definitely very exciting.

Christina Cardoza: Absolutely. I can’t wait to see what else Sodaclick does with Intel. So I just want to thank you both again for joining the conversation and for the insightful thoughts. And I invite our listeners to visit Sodaclick, visit Intel; see what they can do for you and how they can help you guys enhance your businesses. So, thank you guys again. Until next time, this has been the Talk.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

About the Host

Christina Cardoza is an Editorial Director for Previously, she was the News Editor of the software development magazine SD Times and IT operations online publication ITOps Times. She received her bachelor’s degree in journalism from Stony Brook University, and has been writing about software development and technology throughout her entire career.
