George Lynch, Head of Technology Advisory at NashTech, shares what you should be thinking about when trying to escalate your artificial intelligence plans. This article first appeared on ComputerWeekly.com.
Artificial Intelligence (AI), and generative AI in particular, has the potential to be truly transformative. It’s the next great evolution in how we use technology, following on from the mass cloud adoption of the last fifteen years.
But while the excitement and anticipation are huge, what is the actual way forward with AI? How can organisations operationalise it to deliver business benefits?
It’s a question that’s very much on the minds of technology leaders. The 2023 Nash Squared Digital Leadership Report, which takes in the views of over 2,100 technology leaders around the world, finds that seven in ten tech leaders believe the benefits of AI outweigh the risks – but only 15% of them feel prepared for the demands of generative AI.
Only two in ten have an AI policy in place and more than a third (36%) have no plans to even attempt one at this time. As our report reflects, there is “excitement, confusion and concern in apparently equal measure.” For probably the first time in my career, people are genuinely having conversations around “Just because we can, should we?” AI is raising a whole new set of questions and debates.
The report also observes that, while large-scale AI implementations have been limited to date (only 10% of organisations), we are now reaching a tipping point due to the growing popularity of generative AI. It’s something I’m seeing with clients everywhere: almost all of them are asking what adoption of AI – and specifically generative AI – could look like for them. What are the likely productivity benefits, what are the risks, and what are the costs?
I have mentioned cloud already – and in many ways, the point we have reached is reminiscent of the early days of cloud and SaaS. Then, many CIOs and digital leaders were nervous about the move, fearing the rise of ‘shadow IT’ and a loss of control within the organisation. But the most successful leaders realised that this was something that couldn’t be held back – they needed to embrace it and manage it by leading the evolution rather than attempting to micro-manage it.
The same applies here. In fact, it applies even more. Because whereas there was a degree of optionality over cloud and SaaS – it was essentially up to the CIO whether and when the organisation moved to it – with generative AI there isn’t that same element of choice: staff in the business will start using it (and already are). Realistically, there is nothing the CIO can do about that.
That’s why supportive guidance and policies for staff are essential, because generative AI carries some obvious risks.
Data privacy and confidentiality are a particular issue – the second highest concern in our Digital Leadership Report (cited by 36% of tech leaders), ranking only behind the need for effective regulation (42%).
While hesitancy about creating AI policies is understandable in such a new field, it needs to be overcome as soon as possible. It is better to have an imperfect policy that you commit to updating than no policy at all. Basic protocols need to be clear and understood, and staff need to be supported to make good decisions. Alongside this, businesses should support their staff by bolstering AI literacy – holding training and awareness sessions, running discussion forums, providing online learning resources and so on.
Our research finds that nearly half of organisations have some form of AI implementation or pilot in play. When it comes to generative AI, that figure is around a third. My advice for them to make this a success – and for other businesses that have not yet started – is to remember a few simple key principles.
Firstly, remember that AI does not fall under the sole ownership of the IT function – so create a multi-disciplinary team to look at it involving other key stakeholders such as HR, Finance, Legal and Marketing. Consider also giving overall responsibility for AI to one person in the business leadership team as part of their role. This will provide more clarity around accountability. In many businesses, responsibility for AI is quite amorphous at present. Having an AI leader will also help to take it out of Board or executive committee theoretical discussion and move it into a more practical, action-oriented domain.
Don’t try to use AI to solve everything at once – be clear about the specific use cases you want to employ it for.
Identify the areas that have the highest potential to add value and focus on those. We did just this in our business by assembling a multi-skilled team to create an intelligent chatbot called “BonBon”. Using OpenAI’s technology, this chatbot is now allowing our clients to automate tasks with a human-like interaction, such as onboarding new employees or responding to customer queries.
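By way of illustration only – the details of BonBon’s implementation are not described here, and every name and response below is hypothetical – a task-automating chatbot of this kind typically routes each incoming message to a task-specific handler. A minimal sketch of that routing step, with simple keyword matching standing in for the LLM-based intent detection a real assistant would use:

```python
# Illustrative sketch only: route a user's message to a task-specific
# handler via keyword matching. A production assistant such as BonBon
# would use an LLM (e.g. via OpenAI's API) for intent detection and
# response generation; everything here is a hypothetical stand-in.

def detect_intent(message: str) -> str:
    """Classify a message into a known task, or fall back to 'general'."""
    text = message.lower()
    if any(word in text for word in ("onboard", "new starter", "first day")):
        return "onboarding"
    if any(word in text for word in ("refund", "order", "invoice")):
        return "customer_query"
    return "general"

# One handler per supported task; each takes the message and returns a reply.
HANDLERS = {
    "onboarding": lambda m: "Welcome! I'll walk you through your onboarding checklist.",
    "customer_query": lambda m: "Let me look into that order for you.",
    "general": lambda m: "Could you tell me a little more about what you need?",
}

def respond(message: str) -> str:
    """Route the message to the matching handler and return its reply."""
    return HANDLERS[detect_intent(message)](message)
```

So `respond("I need help onboarding a new starter")` would be routed to the onboarding handler, while an unrecognised message falls back to a clarifying question – the same shape of flow, if not the same mechanics, as an LLM-backed assistant handling employee onboarding or customer queries.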
It is also highly advisable to consider working with an external, independent technology consultancy that can give you objective advice and guidance. In such a new area, this is a time for consultancy to step up.
Finally, be clear on what your corporate ambition is. Do you want to be an early adopter, leading the way and creating a competitive advantage? Or a fast follower, with lower risk and potentially lower cost? Or are you content to move much more slowly – what some would describe as a ‘laggard’ – minimising risk and waiting until the technology and the use cases are more widely available and their robustness is proven?
For some, early adoption makes powerful sense – such as businesses with large numbers of people using technology to do the same things, like a service centre or customer service operation. For these businesses, cost savings may be the primary driver. Fast followers are likely to be businesses that see the opportunity to drive value creation by harnessing AI to free up people to focus on more value-adding tasks.
Wherever you sit on the spectrum, AI is going to have a massive effect. There is some hype of course, but this will right-size itself over time. We normally tend to overestimate the impact of new technology in the short term and underestimate it in the medium to long term. The goal must be to harness AI, under the decision-making control of human beings, for genuine improvements in efficiency, performance and outcomes – we want it to be omnipresent but not omnipotent. That’s the balance we need to collectively strive for.
The author is George Lynch, Head of Technology Advisory at NashTech, part of Nash Squared. The Nash Squared Digital Leadership Report 2023 is based on the world’s largest and longest running annual survey of technology/digital leadership. Over the last 25 years, the research has taken in the views of over 50,000 technology leaders.