AI and edge computing are taking IoT to the next level, with new and exciting use cases expected over the next year and beyond. But as the rapid pace of advancements and investments unfolds, it raises the question: Are we accelerating too fast?
In this podcast, we dive into the forefront of technological evolution, unveiling CCS Insight’s top IoT and Edge AI predictions for 2024. We also navigate through responsible development of solutions and explore the impact AI is set to make in the coming years.
Our Guests: CCS Insight
Our guests this episode are Martin Garner, Head of IoT Research, and Bola Rotibi, Chief of Enterprise Research, at CCS Insight. Martin has been with CCS Insight for more than 14 years, specializing in the Internet of Things. Bola has been with the analyst firm for more than four years, researching the software development space.
Martin and Bola answer our questions about:
- (1:33) How the rise in edge AI impacts IoT
- (4:05) If the cloud versus edge debate will continue
- (7:36) What the future of AI development looks like
- (11:20) The reality of generative AI
- (15:08) Business considerations for AI investments
- (17:55) Upcoming AI regulations and implications
- (21:25) Developing AI with privacy in mind
- (25:43) Where 5G fits into AI predictions
To learn more about edge AI trend predictions, read AI and Beyond: Forecasting the Future of IoT Edge AI and the complete research paper Edge Computing and IoT Predictions for 2024 and Beyond. For the latest innovations from CCS Insight, follow them on Twitter at @ccsinsight and on LinkedIn.
Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And it’s that time of year again where we’re going to look at the top IoT and AI predictions for 2024. And joining us again on this topic we have Martin Garner and Bola Rotibi from CCS Insight.
So for those of you who aren’t familiar or didn’t get to hear our last podcast with these two experts, let’s get to know them a little bit before we jump into the conversation. Bola, I’ll start with you. What can you tell us about what you do at CCS Insight?
Bola Rotibi: I’m the Chief of Enterprise Research at CCS Insight. I have a team that covers software development, workplace transformation, cloud, AI of course, security—quite a whole load, but everything to do with the enterprise.
Christina Cardoza: Great. And Martin, welcome back to the show. What can you tell us about CCS Insight and yourself?
Martin Garner: Thank you, Christina. Well, yes, I’ve been at CCS Insight for 14 years. I lead the work we do in IoT, mostly focusing on the industrial side, and so work very closely with Bola on those areas.
Christina Cardoza: Great. And, Martin, since you’ve been at the company for, like you said, 14 years now, we’ve worked a bit over the last couple of years. You guys always have an IoT prediction you put out at the end of the year, and we’ve always worked to talk about that and to see what’s coming next.
So I’m wondering, in the last year or even the last couple of years there’s been a big push to edge and AI. So I’m wondering how that impacted this year’s predictions and what we see coming for 2024.
Martin Garner: Sure. Well, obviously 2023 was a huge year for both of those areas. Edge was already growing strongly coming into this year, and I think 13 months ago ChatGPT had just launched and has clearly had a massive impact during this year and has really opened the world’s eyes to what AI can do—and quite a few things that it can’t do yet. So I think it’s fair to say that the world is now thinking rather differently about the two areas, and we are too.
So in our predictions last year we had lots about edge and some AI. I think for anyone who downloads the booklets at the end of the podcast you’ll see that we really dialed up the AI predictions this year. And it’s important: it’s not just generative AI like ChatGPT, and it’s not just what individuals can do with AI.
And there’s one prediction I’d like just to give as a little example of that, which is that by 2028 a major healthcare provider offers its customers a digital twin service that proactively monitors their health. Now today there is a lot of data about you scattered around—so we have online health records in many countries; we have fitness bands; we have smart watches and they show activities, sleep, and so on. There’s lots of data on your phone, but it’s all a bit messy and it’s not joined up at all.
So we think that healthcare providers over three to four years will start to combine these sources. We’ll use AI and we’ll basically do what the industrial guys are already doing, which is predictive maintenance on people. And of course the aim is early intervention gives smaller intervention, and it’s usually cheaper intervention too. The outcomes are just better if you get there earlier. So that’s an example of how we think AI will start to develop from here.
Christina Cardoza: Yeah. And I love what you said—we’ve learned a lot about what we can actually do with AI, or what’s possible. There’s always that hype or that trend that everybody wants to jump on right away. So I want to dig a little bit into that.
But before we get there: with AI and this movement to edge, wanting to get information in real time and faster, there’s also been a lot of movement to the cloud with all of these solutions and applications. And I know there’s always that cloud versus edge versus on-premises debate. Will that continue? Will people keep moving to the cloud over the next couple of years, or is on-premises still going to remain steady?
Martin Garner: Well, it’s a good question, and I think that debate is going to remain live for a good few years. And I want to draw out just a couple of aspects of that. One is quite short term. I think in many countries there are fears about where the economy’s going; we’re not quite out of all the pandemic effects yet. So one prediction is that recession fears push workloads from the cloud to on-premises through 2024.
Now, the best candidates for that are companies which are using hybrid cloud. It’s not so much those who’ve gone all-in on the cloud. And there are a few new initiatives, like HPE GreenLake and Dell APEX, which bring the flexible consumption model that we have in the cloud—they bring it to hardware and allow you to do that at the edge as well. So it may be that a hardware refresh in some of those companies is a good opportunity to rationalize some aspects, sort of repatriate some of their workloads to on-premises and take some cost savings while they’re doing it. So that’s the short term.
Longer term, we think there are several areas where there are sort of pendulum swings, like insourcing versus outsourcing, and edge to cloud could be one of those. And it’s partly also that there are new tools and different economics coming up in both areas. So it’s all changing quite fast.
But one area where we have another prediction is that a repricing of cloud providers’ edge computing services takes place by the end of 2028. Now, what does that mean really? Well, the big cloud providers all have edge computing services, but actually the public cloud is the big part of the business. The edge services are sort of an on-ramp for the public cloud, and they’re priced to do that. But the forecasts for edge computing show it growing bigger than the public cloud over five years, possibly a lot bigger.
Now, if that happens, then we’d be in a position where the big part is subsidized by the smaller part, and that makes no sense at all. So we really expect the prices of those edge cloud services to move upwards over three to five years, and that could be a significant price rise. So we think many industrial companies might want to consider their own edge computing, just in case, to put themselves in a better position. It’s edge as a hedge, if you like, and it gives them more options about how they go forwards.
Christina Cardoza: Yeah, absolutely. And that’s a great point. Companies don’t want to be locked in with a vendor; like you said, they want more options. They make a lot of investments to move to and build these solutions, and they want to make sure any investments they do make are future-proof. I think a lot of what’s driving it too is the type of solution they’re building. If it’s a more critical workload or they need more security, they tend to stay on-premises. If they need real-time analytics or faster results, that’s sometimes where the cloud and edge computing come in.
So I want to talk a little bit about AI development with you, Bola. What can we expect in terms of the AI solutions or the developments happening over the next year within enterprises?
Bola Rotibi: Well, it’s a good question actually, because I think we’re going to expect quite a lot. Whilst 2023 has been a year of launches, especially from many of the IT solution providers, there’s been a wealth of new tools. I mean, obviously when ChatGPT launched at the end of November 2022, it really spawned massive interest, and we’ve seen it grow exponentially, especially with generative AI.
But I would say that the development of AI has actually been happening for quite some time, with machine learning and various other models and algorithms that people have used behind the scenes. Anything you’ve been using on your mobile phone to search through pictures, those are AI solutions, AI models. So AI itself is not new, right?
But what we are seeing is the power of generative AI, especially as we think about productivity. Everyone has latched on to the conversational aspect of generative AI and its ability to simplify even complex queries and bring back concise, relevant information. So everyone’s jumping on that, because we see it as a productivity solution.
So what we’re seeing now is a launch of lots and lots of solutions. Over the last year we’ve seen pretty much every provider, Intel amongst them, bringing out generative AI capabilities, as well as beefing up their own AI solutions. We’ve seen Microsoft launch Copilot, its AI-powered assistant, and we’ve seen others from providers like AWS with Amazon Q.
But one of the things that we do think, despite all this frothiness around the interest, is captured in a prediction which said that AI investment and development accelerates in 2024 despite calls for caution. And we’re already seeing the throughput of this. The reason we see this is that quite a few of the main protagonists have, over recent months, said, “Well, hold on just a minute. We need to slow this down. Things are moving rather fast.” People are a bit worried about security; they’re worried about whether the regulations are out there, whether they are effective enough. And all of this caution is still wrapped around it.
But at the same time I think there’s a real thirst to get AI, to develop it, to really get their hands on it. Because people have been blown away by the new experiences and the engagement levels that they can have, especially with generative AI. So I think there’s going to be even more acceleration next year.
Christina Cardoza: Yeah, absolutely. And there’s a lot to dig into there. I want to start off with the generative AI piece. Like you mentioned, it brings so many possibilities and benefits that a lot of companies are trying to jump on this. And I always hear in the back of my head one of our guests on the podcast saying, “You want to create solutions that solve a problem, not create solutions just to create a solution, just to jump on the latest trend.”
So I’m curious with generative AI, because obviously, we’re seeing all of these companies talk about it—all of these releases and with AI toolkits adding generative AI capabilities, there’s been a lot of hype. What’s the reality? What do you think will actually come out of this space in the next year, and what can we be looking for? What are the applications or use cases that are possible?
Bola Rotibi: Oh gosh, that’s lots and lots of questions. Well, first of all I think there is a lot of froth around generative AI, but that’s because, as I said earlier, everyone’s been blown away by the productivity they’ve seen from it, the intuitiveness of it all, and just being able to talk or put together natural language queries and then get back concise information.
That said, one of the predictions that we outlined is that, despite all of the excitement around generative AI and the fact that we’ll see lots and lots of new tools, we do think 2024 will see some level of slowdown. That’s partly because, as people get to grips with the reality of the costs, some of the risks and complexities start to be exposed, as they have started to be this year. So I think we’ll see a slight slowdown.
But despite that, it’s a bit of a thing where on one hand you’re saying it’s going to be really, really great and really fast, and on the other hand we’re going to see some slowdown. What I think is that, as with any technology, there will be people just trying all sorts of capabilities.
Nowadays you can go in and start typing in queries, and then immediately get very concise answers back. And we’re seeing generative AI from multiple perspectives. We’re seeing it as an assistant, providing relevant information across lots and lots of different data types. We’ve also seen it being able to generate code of very good quality, at least as a starting point, especially for experts to finalize and do a little bit more quality control on.
So on one side I think 2024 is really where people start to play with it properly and with the tools, and start to understand some of the limitations of those tools, as well as what they’re going to be able to achieve. The hype of 2023 will start tempering down into a much more level-headed and nuanced approach, alongside the excitement of actually delving into some of the capabilities, like generated code. We’ll start seeing it across different types of workplace solutions, helping knowledge workers, but also helping expert professionals, whether those be expert developers or other professions as well.
Christina Cardoza: Yeah, it’s interesting, because we may see a slowdown in some of the hype of everybody jumping onto this trend. But, like you mentioned, we’re going to see an acceleration in AI investments in other areas, despite the need to maybe slow down, take a step back, and look at how we’re developing things.
And, Martin, I know we’ve spoken in conversations outside of the podcast about AI oversight committees being something that might be coming. So can you talk a little bit more about what that means? How do you anticipate ethical AI initiatives springing up as this investment and acceleration continue?
Martin Garner: Well, yes, of course. And you’re right, we touched on it before. It’s a big subject and we could talk all day about that. We’re not going to. So the short answer is, yes, there’s going to be a lot more of that. And I think you already said it, Christina, that AI does have the potential for many, many good uses in society. But also used wrongly it has the potential to do a huge amount of damage, and it’s a bit like medicine, where regulated drugs are generally good for society and unregulated drugs like opioids and fake Wegovy and things, generally not so good.
And the big difference from medicine is that there’s no professional body, there’s no Hippocratic oath. You can’t be struck off as an AI practitioner, at least not yet. So at the moment we have the opposite, where the leading AI companies seem to be on a big push: as soon as something new is developed they open source it and push it out into the world as fast as possible. And that obviously puts a huge imperative on suppliers and developers to take their own ethical stance in how they use it: which customers are they going to deal with or not deal with, how to train their staff, internal governance. There’s lots to get right there. But it also puts an onus on companies who are using AI as customers, and they need to step up too.
And so we do have a prediction, which is that AI oversight committees become commonplace in large organizations in this coming year, 2024. Those are likely to be committees of ethics experts, AI experts, legal advisors, data scientists, HR, representatives from the different business units, and so on. They’re going to have to review the use of AI across the company and in the company’s products. And their job really is to bridge the gap between the tech teams, who are all engineers rather than ethicists, and the organization and its goals and what it wants to do with AI.
And we think that’s going to be quite a significant overhead for a lot of companies. Quite difficult to get right. Lots of training to come up to speed and to stay on top of it, because it’s all moving so fast that the committee will have to move fast too. And all that because the AI industry is largely not doing a good job of self-regulation.
Christina Cardoza: One thing I’m curious about is that the EU is putting together the AI Act, and I’m wondering what this will mean for developing solutions. GDPR was an EU regulation, but it had a global impact. Do you anticipate the same thing happening when the AI Act is passed?
Martin Garner: I think probably Bola is the better one to talk on that, if I may. Can I hand that one to Bola? I know you’ve been looking—
Bola Rotibi: Yes, I think definitely, because at the end of the day I do think the regulators are coming together. The EU has been first out the door with the EU AI Act. And when the act does come in, it will be like GDPR. We’ve already seen the ratification of the Digital Markets Act, and we’ve already seen the effects of that, with some of the companies being highlighted still working through whether they are the main gatekeepers.
But I think when it finally does come through, there will be a bedding-in process as people try to get used to it, try to understand what it means, what all the constructs are, and all of the rules and regulations. So there will be teething problems, but I think it will become a regulation for people to rally around.
But the EU is not the only one. We’ve got the efforts in the US, and we’ve also got the UK pushing to have a really strong play with AI regulations. And then we’ve got China and other regions as well. So I think we’re going to start seeing some level of improvement towards the end of 2024, certainly, as these regulatory frameworks come out. And I think that will be really important.
The other thing that’s actually happening is not just about regulation at the international and national level; it’s also what the industry is doing. And I think some really exciting things have been happening. Recently IBM, Meta, and Intel were among some 50 organizations that launched the AI Alliance, which aims to bring the industry together to work collectively, like any other body: to standardize, to bring working groups together, to come up with ideas for strategies and approaches to handling certain AI challenges and opportunities, and to be the home or hub for interactions with end users as well.
Christina Cardoza: I love seeing all of these AI efforts, especially from big tech companies, because AI has a lot of benefits, a lot of opportunities, and the ability to transform industries and transform our lives for the better. But there is that level of discomfort among end users: how their private data is going to be used, how secure it is, what it’s going to do, how it really is going to transform their lives. So I think this helps put a safeguard in place and minimize some of the risks people think are out there.
And I’m curious—because obviously we can’t wait for some of these acts or some of these efforts to be passed or to take off before we start developing in a more ethical way—so I’m just curious, Bola, you work with developers. How can they continue to develop these AI solutions with privacy and ethics in mind and ensure the safety and security of their solutions and of the end users?
Bola Rotibi: Well, it’s a good one actually, because I think invariably what’s going to happen is that there are likely to be oversight committees within organizations; that’s something we’ve actually put down in a prediction of ours. But I think there will also be communities working within organizations to understand what it is that they can do. Many of the tools they are using now, and many of the providers, are building in that responsible AI, ethical AI, from the ground up.
What developers then have to start thinking about isn’t just on the developers, because at the end of the day a developer might think, “Well, actually, I’m just writing code, building an application.” But in the same way that security is down to everyone in that workflow, so is responsible AI and an ethical approach. It’s down to everyone.
So I don’t see it as just a developer requirement, but at the same time there need to be frameworks in place, and that needs to be driven around the whole organization. It needs to come from the ground up and from the top down. There need to be principles that are used, distributed, and circulated across the organization, and there need to be some sort of guidelines.
Now, the tools are coming with guidelines and guardrails, right? In terms of, “Okay, how do I prevent certain things happening?” Because you can’t expect anyone who’s always developing to have everything in their heads about, “Oh, okay, is this ethical?” Of course, you could ask yourself, “Well, just because I can do it, should I do it?” You know? And that’s always the case.
But at the same time, if you want a level of consistency, we need to provide those guidelines to the organization, to the development organization, right across the board. And there need to be guardrails. I think many of the tools are recognizing this, so that they can allow organizations of any size to put their own policies in.
So going forward I see a layered approach. There may be an oversight committee within the organization that thinks about where the organization stands from an ethical standpoint, from a responsibility standpoint, and starts building policies. Those policies will be driven into the tools so that they act as guardrails. But there’s also going to be guidance and training of developers in taking an ethical, responsible-AI approach.
So I think, going forward, it’s a bit like knowing right from wrong. But this is something the development community is already aware of. There are a lot of programs and initiatives out there, things like aiming to do code for good. Lots of organizations have been thinking about impact, about sustainability, and all those kinds of things. So there is already a wealth of ideas and initiatives to make people think at multiple levels, not just about responsible AI, but about doing the right thing and thinking about sustainability. So I think the approach is already there for many developers, but they do need help.
Christina Cardoza: Yeah, and I love what you said: it’s not—one camp is not responsible for this. It’s really a collaborative effort from everybody building it, everybody using these types of solutions, everybody touching AI. So I love that.
We are running a little bit out of time, but before we go I just want to change the conversation to a different topic real quickly while I have Martin on the podcast, because, Martin, over the last couple of years we’ve been talking about 5G. Are we looking to 6G? Where are we in the 5G efforts? Is it still beginning? So I’m just curious, as we’re talking about AI and edge, where 5G fits into all of this, what the impact of AI is going to be on 5G networks, and when it is time to start looking at 6G?
Martin Garner: Yeah, 6G, I love that. We don’t quite have all the good bits of 5G yet, do we? They’re coming, but they’re not quite here yet. But there is work going on on 6G.
So in terms of AI and 5G, I think the first thing is that for organizations who are starting to use 5G in their factory, in their warehouse, in private networks up and down the country and so on, one of the things 5G will do is enable a lot more use of AI, thanks to the very high capacity, time-sensitive networking, and location services it will bring in. And we’ll see a lot more AI in use around those domains: a lot more autonomous vehicles and things. We can already see good examples of autonomous trucks used in mines in Latin America and in ports on many different continents. There’s lots more of that to come, with 5G and the newer bits of 5G, which are nearly here, as one of the key enablers.
But I think the other interesting bit is the impact of AI on the network itself. Now, there are several tricky aspects if you try to buy and use 5G, things like coverage planning on public and private networks, which is a very good candidate for using AI to make it simpler so that more people can do it. Also, 5G networks are complicated things; they have a very high number of settings. So the whole optimization and management side is a big deal in a 5G network.
And we have a prediction around that, which is that AI enables 5G networks to move beyond five-nines availability, and anyone who’s used a cellular network will appreciate the importance of that. That would come through by analyzing traffic patterns and ensuring that the network is set up best to handle that type of traffic, identifying problems, doing predictive maintenance, and, if things are going to go wrong, configuring the network so it degrades gracefully or even becomes self-healing.
And we think there’s a lot more that networks themselves can do with AI to become much better quality and really support not only people up and down the country on the public networks, but also really the OT world—which absolutely, if they use it a lot, they will depend on it, and it’ll be very expensive when things go wrong. So supporting that requires that degree of quality.
Anyway, then 6G. It is a tiny bit early for 6G, but work is going on of course. And over the next five years or so we’re going to be building 6G networks. We think 2030 is going to be a bit of a headline year for 6G. And we have a few 6G predictions. Here’s one: which is that by 2030 the first 6G-powered massive twin city is announced. And we think that cities will be a great showcase and massive twinning is one of the best use cases, because all the layers of a city could be potentially included in the model there. And the 6G network could enable a sort of full representation of the whole city environment.
We don’t think that’s likely to start with older cities—much more with a new project such as they’re doing in Saudi Arabia at the moment. And if we do that, they’re going to need 6G just for the sheer volume and speed of the data in real time that runs through a city. So that was a really good example. We think 2030, big headline year for that.
Christina Cardoza: Yeah, absolutely. And I agree, we still have to fix or finish the 5G aspects that we have there, but of course you have businesses or companies out there already trying to talk about 6G. And I agree that for all of these AI solutions and benefits and opportunities we want to take advantage of we need to make sure the infrastructure is there to make it possible first.
Martin Garner: Yeah, and working as well as it can.
Christina Cardoza: Yeah, absolutely. So this has been a very big conversation. Obviously there’s lots to look forward to, and we have only touched a small subset of the research paper, the CCS Insight IoT predictions that come out every year, and it has already been such a big conversation.
So before we go I just want to throw it back to you guys. Are there any final thoughts or key takeaways that you want to leave our listeners with today? Bola, I’ll start with you.
Bola Rotibi: Well, yes, I think the one thing that I do feel is going to become even more prominent, if people haven’t already identified it yet (and we have a wonderful prediction about this), is that proficiency in generative AI is a common feature of job adverts for knowledge workers by 2025. And I think you could probably stretch that out to more than knowledge workers, probably to everyone.
And the reason we say that is this swathe of recent announcements for generative AI: we’ve seen it being embedded into workplace tools such as Microsoft Office and Google Workspace, all of the different types of solutions that are out there. I mean, as mentioned earlier, you now get generative AI capability pretty much right across the board; almost any tool you open up has a generative AI chat box there.
And I think the thing is, we already have data showing that the more proficient you are, i.e., the better you are at creating those prompts (which improves with training as well), the better the productivity and the better the quality. I think people are going to recognize that as a real efficiency and effectiveness factor for the workplace and its workflows, especially if you’re trying to streamline those workflows and make them more effective.
So I think that proficiency side is going to be a real big thing. So my advice is to start looking at how best to build out those prompts, and taking some sort of training support in that. It’ll follow the way of many technologies—you know, the better you are with them, the more effective that you can get them to be.
Martin Garner: And I think there is already a job title called Prompt Engineer, isn’t there? I’ve heard about this.
Bola Rotibi: Exactly, yes.
Martin Garner: And what I think you’re saying, Bola, is that that’s going to evaporate and everybody’s going to be good at it.
Bola Rotibi: Yeah, exactly. I mean, prompt engineering is going to become just another kind of engineering. We can joke about it, but really it’s going to be a little bit more than just being good at search terms. I think everybody’s going to have to really learn how best to construct their prompts to get the best insights and the best information out of them.
Christina Cardoza: Yeah, absolutely. It’s almost like the AI developers, you don’t really call them AI developers anymore because everyone’s developing for AI. Everyone needs to know how to develop for AI, so they’re just developers. But, Martin, anything you want to leave us with today?
Martin Garner: No, no, I think that sort of wrapped it up really quite nicely. Thank you, Christina.
Christina Cardoza: Yeah, absolutely. Well, thank you guys again. It’s always a pleasure to have both of you on the podcast, and it has been a great, insightful conversation. I invite all of our listeners to take a look at the CCS Insight 2024 predictions for IoT. We will host them and make sure a link is available for our listeners to check that out. And I want to thank you all for listening to the podcast. Until next time, this has been the IoT Chat.
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.
This transcript was edited by Erin Noble, copy editor.