The release of ChatGPT shot AI well and truly into the mainstream, leaving organisations scrambling to work out what its impact is going to be. One of our clients that has been giving this a lot of thought is Heathrow, Europe’s busiest airport. Our Chief Technology Officer, Kev Smith, sat down with Heathrow’s Head of Cloud and Data, Andy Isenman, to discuss how AI is affecting the airport and the challenges they’re looking to head off over the coming months and years.
Preparing for AI
Kev: 2023 has been described as generative AI’s breakout year. What do you think that means for Heathrow and other organisations over the coming months and years?
Andy: It's definitely a really interesting time, and I think that ChatGPT, and the follow-on race between large technology organisations, has really captured the imagination of the whole world. It's important to remember that AI has been going on for a while now; it's many years since computers started beating chess players. I think it's a great opportunity: now that the world's imagination has been captured, we've got a real chance to deliver AI across the board – not just the generative AI being talked about today.
In terms of where our business – and businesses generally – should be looking, there has to be a proper review of how AI can improve your business. It's got to be well thought out, because if it's not, it's going to be an incredibly expensive game to play.
The other area that's important to look at is upcoming regulation. Some organisations will make advances that will subsequently be regulated; others will wait until regulation is in place before committing. My view is that you need to find the balance between the two in order to make the most of the technologies available to us at the moment, while keeping in line with where we think future regulation may go.
Kev: In terms of regulation, do you have an idea of where you think that might go?
Andy: I think there are some interesting ideas out there. For a UK company, I would suggest broadly aligning yourself with the ICO is a good idea if you're going to use data. I think the US act and the EU act will ultimately become the gold standards for the world, given that's broadly what happened with regulation like GDPR. So, I'm looking quite closely at the EU AI Act and the US equivalent as a benchmark of where we think the regulation will fall. I think you then need to take a slightly more conservative route from that. That's broadly where we think our appetite for AI sits.
Kev: And in terms of AI and its future use at Heathrow, what kinds of things are you looking at now or looking to develop over the coming years?
Andy: There's a fantastic opportunity around passenger experience. One great opportunity presented by large language models will be around language translation and how we can drive a more inclusive experience for our colleagues and passengers using some of this technology. I think there'll also be some efficiency gains and some automation that we'll be able to get out of it. We're very much at the start of that journey. Our focus is on seeing whether AI can actually help us with a given problem; we don't just want to use AI for the sake of it (as some others may be doing).
Kev: Do you think that there's a danger that some organisations will look at AI like a magic wand?
Andy: I think the market is generally pitching AI as a magic wand. And I think they could end up being incredibly expensive magic wands – or magic wands that subsequently need to comply with future regulation.
Kev: A lot of organisations sense that they should be using AI, but many of them simply aren't ready to implement it. What prep work do you think enterprises need to be undertaking to be ready for it?
Andy: There are two things you need in order to be really prepared. Firstly, quality of data and data governance. You need to ask yourself: do you have good quality data? Is that data going to drive a differentiation in experience? Is it well organised, well structured, and well understood? The second is compute. One of the challenges with cloud is that you can consume as much as you like, which is going to drive a significant amount of cost. You need to be working with a partner organisation that's looking at your cloud computing for you, to make sure you're able to deploy these technologies within your cloud infrastructure cost-effectively. If you don't implement it correctly – if you don't have engineers and architecture owners really focusing on the implementation of these technologies – you're going to find that it becomes very costly very fast. We saw that with cloud, and we're going through a similar pattern now.
Kev: Is AI something that you think will be aligned to the CDO (Chief Data Officer), under the data banner, or do you think it is heading somewhere else?
Andy: I think it's the next big debate. My personal view is that AI will lend itself to a Chief Technology Officer role, or a CDO with really good technology skills in terms of the underlying architecture, because AI is as much a compute problem as it is a data problem. I don't think you'll solve the AI agenda without looking at it through both lenses, so I think it will gravitate back towards a CTO role. However, there are lots of very talented CDOs out there who understand the technology very well, and I think they'll thrive in this environment too.
Kev: Cost control is one of the areas that needs to be looked at carefully. What are some of the other risks around not getting the foundations right?
Andy: Beyond cost, it's not having quality data, and getting distracted as a business, looking at use cases for AI that don't directly improve it. You need a really clear focus on what you're trying to achieve as a business – and to question whether AI can genuinely help you achieve that. A good example is a customer service organisation putting in an AI chatbot, taking people away from humans; you need to ask how many of your customers want to speak to humans on a day-to-day basis versus artificial intelligence. Furthermore, you must consider that regulation may get to a point where you have to declare every time your customers are interfacing with artificial intelligence, ending up with a sort of cookie-banner scenario where every time you do something you've got to agree to talk to AI. So you've got to think: will your customers want that? Is the appetite there?
Kev: We've noticed the conversations with our clients changing from ‘What can we do with our data and how can we gain more insights from it?’ to ‘How can we use AI and what should we do with it?’ You mentioned the importance of getting your data right before starting to think about what you should be doing with AI. Could you elaborate on that?
Andy: Taking Large Language Models (LLMs) as an example, the volume of bias and hallucination you see is directly related to the quality of the data you're putting in. If you're loading an LLM with a lot of documents that are factually incorrect, you can be absolutely guaranteed that the answers the LLM gives you will be factually incorrect. What I mean by that is you've got to make sure you're training the system on data that truly represents what it's meant to represent. The current LLMs available on the internet are trained on volumes of information from the internet; so the grammar, punctuation and spelling of an LLM on the internet is the sum total of the punctuation, grammar and spelling on the internet – which isn't a perfect source of data. So, having a really clean, accurate source of information is important.
Kev: Thinking about the LLM and ChatGPT space, how much of an impact do you think that will have at Heathrow compared to some of the models people don't talk about, or the more niche uses of AI?
Andy: It's a catalyst for change. It's brought AI to the table in terms of the conversation, but some of the work that's been done around computer vision is really important, and there are many, many other disciplines beyond that. LLMs have certainly allowed us to take a sharper focus on data science and some of the recommendation-engine work – the LLM conversation has broadened everything. I don't know yet, but I'm interested to find out how much of our estate will ultimately be powered by that particular type of technology, and how much of it will have generated other conversations within other disciplines in the field.
Kev: Many people are worried about the potential impacts of AI and, as with any kind of new technology or innovation or kind of large societal change, people worry about their jobs and future prospects. Do you see AI having a big impact on the workforce? What are the things you think AI could do well, and what are the things you don't think it'll ever be able to replace?
Andy: I can't speak for the global workforce as a whole, but I have a really strong passion for ethics. I think the next big conversation in any organisation should be: what are our ethics as an organisation, and how is artificial intelligence going to align to them? I don't think that happens through policies, as defining ethical policies probably isn't going to work. But you must create a conversation within the business that draws in people from right across it who have an interest in ethics and doing the right thing, and understand how that group of people is going to shape what the organisation feels it should or shouldn't be doing with artificial intelligence. Fortunately, at Heathrow we have some really strong ethical values, and there's going to be plenty of engagement around the AI ethics topic here for as many years as this goes on. But we should always ask ourselves as organisations: is what we're doing the right thing for our customers and for our society? If the answer is yes, then we should do it. If the answer is no, then it's not something we should be doing with AI. But it's a collective problem – a global collective problem – and at the moment, it seems like regulation is the only tool there is to manage it at the global level.
Kev: So, with Heathrow already starting to think about the ethics side of things and keeping one eye on the regulation that's just around the corner, are there any other things you're exploring at the moment in terms of general governance around AI?
Andy: There must be something around financial governance, particularly making sure we've got a good understanding of how we're going to financially govern the AI. There's also an angle around duplication, and understanding how the entire organisation is using it. It's going to be a democratised technology; it's not going to be a technology that can be controlled by a CDO or a CIO – the business cases are beyond that now. The other piece of governance we're looking at is how we ensure we're not duplicating similar use cases, and how we make sure we're not missing opportunities from people who want to do something but don't have a vehicle to achieve it.
Kev: We've mentioned financial cost control, and I've heard horror stories of cloud budgets being swallowed by LLM running costs. How do you think other organisations should be trying to manage that?
Andy: To start with, take a good, long, hard look at the cost of what you're about to deploy, and use the cloud cost calculators available to you. Secondly, consider: do I need it now, or in six months' time? Some of the smaller LLMs are going to be able to run on a laptop, or even a mobile phone, in 6, 12 or 18 months' time. So if you're looking to solve a use case that doesn't need to be solved for 18 months, rather than investing in an incredibly expensive piece of LLM technology today, you might be better off investing a little further down the line, when the cost of compute comes down as these models become more efficient.
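To make Andy's "now or later" point concrete, here's a minimal back-of-the-envelope sketch in Python. Every figure in it – the query volume, per-token prices and timeline – is a hypothetical assumption invented for illustration, not a real price or anything quoted in this conversation.

```python
# Illustrative only: compares total spend for "deploy a hosted LLM now"
# versus "wait for cheaper compute", over the same 18-month horizon.
# All figures below are hypothetical assumptions, not real prices.

MONTHLY_QUERIES = 500_000        # assumed query volume per month
TOKENS_PER_QUERY = 1_500         # assumed prompt + completion tokens
PRICE_NOW = 0.002 / 1_000        # assumed $ per token today
PRICE_LATER = 0.0005 / 1_000     # assumed $ per token in 12 months
HORIZON_MONTHS = 18              # planning horizon
WAIT_MONTHS = 12                 # how long the use case could wait

def monthly_cost(price_per_token: float) -> float:
    """Monthly spend at a given per-token price."""
    return MONTHLY_QUERIES * TOKENS_PER_QUERY * price_per_token

deploy_now = monthly_cost(PRICE_NOW) * HORIZON_MONTHS
wait_then_deploy = monthly_cost(PRICE_LATER) * (HORIZON_MONTHS - WAIT_MONTHS)

print(f"Deploy now, run for 18 months: ${deploy_now:,.0f}")
print(f"Wait 12 months, run for 6:     ${wait_then_deploy:,.0f}")
```

Under these made-up numbers, deploying immediately costs $27,000 against $2,250 for waiting. The point isn't the figures themselves, but that the comparison is simple arithmetic once you've been honest about when the use case actually needs solving.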
Kev: Do you think that's the direction it's going to take? Are we going to end up in a situation where we've got the Big Four with their LLMs gobbling up everyone else, or do you think the market will niche down, with people creating and using more specialised models?
Andy: That's the risk. It will depend on how the open-source community responds. If it can keep a lot of this open source, we've got a better chance of the capability sitting across a broader range of people in society than if all the knowledge, the skills and the ability to execute end up in four big organisations. So, keep a close eye on how well the open-source community is doing in the conversation.
Kev: Do you think the future of AI is going to be led by universities or big tech?
Andy: Big tech, easily. You've got to understand that the innovation budget of one of the leading organisations or hyperscalers is bigger than the total tech budget of UK universities. Taking all the universities' tech budgets as they stand at the moment, Microsoft is spending more year on year on innovation. I'm not sure how universities are going to outpunch technology companies in that respect; it'll be interesting to see what happens with the university approach. I think where it will be successful is where technology companies come together with universities, so that universities can carry on turning out the talent needed to fuel this wave. Then the technology companies can use that talent to drive innovation forwards.
Kev: I know you're doing a lot of thinking about this, both personally and professionally. Is there anything at the forefront of your mind at the moment in terms of this whole space?
Andy: I think the only thing we haven't covered around the ChatGPT conversation is really thinking about what you're putting into these models and what information you're feeding them, particularly from a corporate perspective. We're starting to see more options where corporations can use these types of technologies without the information that's put through them feeding the models. I don't think you can regulate against your employees using these technologies, so training is super important. It follows a similar path to cyber security: as organisations, we're never going to be in a position to simply stop everyone being hacked, but we can raise employees' knowledge much more quickly if we tackle the subject head on. So don't ban the technology; educate your employees.
Kev: Do you think there's a danger we're heading for another Cambridge Analytica moment with AI and ChatGPT, with people putting material into these engines that is then hoovered up for data collection?
Andy: Someone, somewhere, is using AI at the opposite end of the ethical spectrum from where I would want it to be. I'm sure AI will have its incredible success stories, where we discover fantastic things that really push the world and humanity forwards, but we naturally have to expect that there will be people at the other end of the spectrum. Hopefully not, but I suspect the reality is that something, somewhere, will occur in this space.
Kev: But an exciting space to be in, nonetheless.
Andy: I think this is a real pivotal moment for the world, so it's a really interesting time. It's the ability to capture the imagination of non-technologists incredibly fast that tends to cause a shift, and this round of AI – the generative AI phase – has captured the imagination of the world. I think it's probably similar to the mobile revolution; that's where I would place it on that scale.
Read more about the Azure OpenAI Service.
If you’re looking to get ready for AI, get in touch.