The Impact of Generative AI on Knowledge Work

Posted By Jonathan Usher
Nov 14 2023

Throughout 2023 the world has been bombarded with coverage of the impact that generative artificial intelligence (AI) will have on businesses, and on jobs. Fuelled initially by enthusiasts’ use of early tools like OpenAI’s ChatGPT to generate poems, stories, recipes, and so on, public awareness and interest grew massively with the unveiling of investments and breathless demos by tech giants (for example, the new Bing Chat and Google’s Bard). Since then, it has felt like being on an exponential curve of news and progress, though in fact the exponential curve started well before 2023. That’s the thing about exponential curves – unless you are paying close attention, the early stages often don’t feel exponential at all.

Yet for all this investment, focus, and activity, we are only now starting to glimpse the likely impact of generative AI technologies on knowledge workers and their jobs – and even now, the learnings are, well, confusing. The most recent sizable, real-world study suggests that generative AI significantly helps knowledge workers – except for the many workers it does not help much at all. And generative AI can greatly improve accuracy, yet there are areas where its use can greatly decrease it. What to make of these seeming contradictions? Perhaps we should start with the topic of productivity, which is all about getting more output from the same amount of input, or less.

For the last couple of decades, at least, it seems that every major new technology shift promising new levels of productivity has, looking back, had only a small – or even negative – actual impact on worker productivity. Yes, your smartphone is a miniaturised marvel of a computing device connected to essentially all the world’s information (what a productivity boon!), but at the same time what we often seem most connected to is an endless photo stream from “influencers”, or we find ourselves heads down in whatever today’s word puzzle is.

While generative AI offers a wide array of potential applications that could revolutionise productivity, it's important to address the flip side of the coin: the risk of distraction and cognitive offloading. With AI systems becoming increasingly adept at a range of tasks, from data analysis to content generation, the temptation for knowledge workers to "offload" cognitive tasks grows stronger. This offloading can lead to disengagement from the very work the technology was designed to assist with, pulling workers into tangential or irrelevant activities. As we strategise the deployment and impact assessment of generative AI in the workplace, we must weigh not only its benefits but also the pitfalls that come with overreliance on automated systems. And this is aside from other vital considerations such as bias, fairness, and privacy. Let's delve into some examples to better illustrate the nuanced impact of AI on worker productivity.

Real World Findings on Productivity Impact

A study published in early 2023 made waves as it looked at the impact of generative AI tools on workers, examining the productivity of customer service agents in a contact centre. This study found that the use of generative AI decreased the time agents needed to handle chats, increased the number of chats agents were able to handle each hour, and even slightly increased the number of successfully resolved chats. Overall agent productivity rose 14% when using generative AI – an impressive result, and one that seemed to kick-start a series of excited reports about the broader impact of AI on worker productivity, GDP growth, and the future of jobs.

More recently, a groundbreaking study has shed light on how generative AI impacts the productivity of knowledge workers. This collaborative research included experts from Wharton, Harvard, MIT, the University of Warwick, the University of Pennsylvania, and Boston Consulting Group (BCG). The study looked at the performance of 758 BCG consultants and focused on the capabilities of OpenAI’s GPT-4, without any custom fine-tuning.

The study involved 18 tasks, carefully chosen to represent the range of responsibilities typically found in a consulting firm – these spanned creative, analytical, and writing/marketing functions. The findings were that consultants equipped with GPT-4 substantially outpaced their non-AI-using counterparts. On average, the AI-assisted consultants completed 12.2% more tasks, executed their duties 25.1% more rapidly, and produced outputs of 40% higher quality. Quality was gauged through evaluations by both human and AI graders, which, interestingly, showed a high level of agreement between the two.

A noteworthy observation was that the least efficient consultants, as initially assessed, experienced the most significant performance improvement – 43% – when leveraging AI. Although top-performing consultants also benefited, their gains were much more modest. This phenomenon, referred to as "skill levelling," has been corroborated by at least one other study, and has substantial implications for workforce upskilling.

A particularly intriguing takeaway from this research, one that I believe matters to all businesses seeking to use AI tools, is the concept of the "Jagged Frontier." This describes the AI's inconsistent competencies: excelling at certain tasks (for example, idea generation and summarising) while underperforming or even failing at others (for example, basic math). Complicating the matter further, this "frontier" is not clearly documented and continually evolves, often unpredictably, as AI models are updated or new ones are introduced.

In one experiment, the researchers pinpointed a task outside of this "Jagged Frontier," designed to exploit the AI's limitations in that it would likely provide an incorrect, albeit plausible, solution. Human consultants solved the problem accurately 84% of the time without AI intervention. However, their success rate plummeted to 60-70% when using AI – a phenomenon likened to "falling asleep at the wheel." When AI seems proficient, people may feel they have less reason to work hard and pay attention; they let the AI take over, instead of using it as a tool, and make more errors. Thinking back to our earlier discussion of distractions, this is a real danger area for us all to look out for.

Some thoughts based on these studies and my own experimentation:

Especially as technology companies roll out next-generation tools that connect more deeply with the apps that workers use, and with the data that the company generates (think Microsoft’s Copilot, Google’s Duet, and others), it’s important that people seeking to use these tools have a good understanding of where the “jagged frontier” lies at any given point in time, and that the tools are used for tasks within that frontier.

A pragmatic approach to help with accuracy is to use generative AI only for tasks that you know how to do. Additionally, it’s important to rigorously verify the AI's outputs, especially when these are linked to sensitive organizational data such as emails, sales metrics, and presentations.
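To make the "verify the outputs" step concrete, here is a minimal, hypothetical sketch in Python of a "draft, then review" pattern. It assumes the official OpenAI Python SDK (openai 1.x) and an API key in the environment; the draft_summary and requires_human_review helpers, and the crude number-matching heuristic, are illustrative inventions rather than a recommended implementation.

```python
# A minimal "draft, then verify" sketch. The helper names and the review
# heuristic are hypothetical; the API call assumes the OpenAI Python SDK
# (openai >= 1.0) with OPENAI_API_KEY set in the environment.
import re

from openai import OpenAI

client = OpenAI()


def draft_summary(source_text: str) -> str:
    """Ask the model for a first-draft summary of material we already understand."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarise the user's text in three sentences."},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content


def requires_human_review(draft: str, source_text: str) -> bool:
    """Flag any figure in the draft that does not appear in the source text.

    This does not prove the draft is correct; it only catches one obvious class
    of fabricated detail, so a person still signs off before the draft is used.
    """
    source_numbers = set(re.findall(r"\d[\d.,]*", source_text))
    draft_numbers = set(re.findall(r"\d[\d.,]*", draft))
    return bool(draft_numbers - source_numbers)


if __name__ == "__main__":
    notes = "Q3 revenue was 4.2m, up 11% on Q2; churn fell to 3.1%."
    draft = draft_summary(notes)
    if requires_human_review(draft, notes):
        print("Draft contains figures not found in the source - review before sending.")
    print(draft)
```

The point is not this particular heuristic, but the shape of the workflow: the model produces a draft quickly, and a human who knows the task remains responsible for checking it before it touches anything that matters.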

Having examined some effects of generative AI on productivity, it's clear that this technology is more than a simple tool; it's a transformative force that will likely redefine the scope and nature of work itself. However, any discussion about productivity gains would be incomplete without considering the broader implications for the labor market. While AI automates tasks and potentially boosts efficiency, it also stirs concerns about job displacement.

The Impact of AI on Jobs

A notable study published by OpenAI, OpenResearch, and the University of Pennsylvania found that nearly 80% of the U.S. workforce could experience at least a 10% change in their job tasks due to the advent of generative AI tools such as ChatGPT and Bard. Additionally, for 19% of workers, the impact may alter at least half of their job functions.

The study delineated job-specific tasks and defined a task as "exposed" to AI influence when an AI-assisted worker could complete it in half the time (or less) needed by a worker without AI, while maintaining the same quality. In practical terms, this implies at least a twofold increase in productivity for such tasks.
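As a quick, hedged illustration of that threshold (the task names and timings below are invented for the example, not taken from the study), the arithmetic is simply a comparison of time with and without AI assistance:

```python
# Illustrative only - the tasks and timings are invented, not from the study.
def is_exposed(minutes_without_ai: float, minutes_with_ai: float) -> bool:
    """A task counts as "exposed" if AI assistance at least halves the time
    needed (equivalently, at least doubles productivity) at equal quality."""
    return minutes_with_ai <= minutes_without_ai / 2


tasks = {
    "draft a routine status email": (20, 6),     # (minutes without AI, minutes with AI)
    "summarise a 30-page report": (90, 40),
    "negotiate terms with a supplier": (60, 55),
}

for name, (before, after) in tasks.items():
    print(f"{name}: {before / after:.1f}x faster -> exposed: {is_exposed(before, after)}")
```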

When workers utilise generative AI, approximately 15% of all tasks could be expedited significantly without compromising quality. This figure skyrockets to between 47% and 56% when additional software is integrated with the underlying generative AI technology. This implies that the true value of generative AI is most likely to be unlocked not just by the large language models themselves, but through specialised applications built on top of them. Anyone who has dabbled with tools built using GPT-4’s application programming interface, or with its Advanced Data Analysis mode, can attest to the enhanced capabilities already available.
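To give a flavour of what "specialised applications built on top" can mean, here is a small, hypothetical sketch: a narrow wrapper that turns the general model into a single repeatable task. The function name and prompt are invented for illustration; the call assumes the official OpenAI Python SDK (openai 1.x).

```python
# A toy "specialised application": wrap the general model in one narrow,
# repeatable task rather than exposing an open-ended chatbot. The function
# name and prompt are invented; the call assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()


def extract_action_items(meeting_notes: str) -> str:
    """Return the action items found in the notes as 'owner: task' lines."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract the action items from the user's meeting notes as a "
                    "numbered list of 'owner: task' lines. If there are none, reply 'None'."
                ),
            },
            {"role": "user", "content": meeting_notes},
        ],
        temperature=0,  # favour consistent, repeatable output over creativity
    )
    return response.choices[0].message.content


print(extract_action_items("Anna to send the revised budget by Friday; Raj is booking the venue."))
```

Constraining the model to one well-understood task like this keeps the work comfortably inside the "jagged frontier" discussed above, and makes the output much easier to check.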

Interestingly, critical thinking skills displayed a negative correlation with susceptibility to AI impact: occupations requiring such skills appear to be more insulated from the influence of the current generation of AI tools, at least.

While the categorization of job-specific tasks by non-experts could result in some mischaracterization, it also provides a useful framework for understanding more precisely how certain roles might evolve. This helps build a roadmap for potential skill training to help workers adapt, flourish, and pivot as their job descriptions transform.

Where to from here?

The landscape of generative AI is both broad and intricate, and it stands to reshape the fabric of knowledge work. As we stand at the dawn of this technological revolution, it is important that we wield the double-edged sword that is generative AI with finesse and caution. Organizations must arm themselves with a solid (and dynamic) understanding of the so-called "Jagged Frontier" – the boundary between the areas where generative AI excels and those where it falls short. Using AI inside the frontier can bring significant benefits; using it outside the frontier can increase risk and reduce accuracy.

It's not enough to merely integrate generative AI tools; the key lies in judicious application aligned with a company's specific needs. Tools like Microsoft Copilot and Google Duet will offer functionality and integrations that can help productivity, but they also necessitate a nuanced approach to implementation. Just as you wouldn't use a hammer for precision surgery, generative AI should be applied to tasks where its capabilities offer real advantages while containing the risks to accuracy or integrity. And of course, other considerations such as bias, fairness, security, privacy, and governance of AI need careful thinking, planning, and ongoing attention.

Early identification of roles and tasks most likely to be impacted in the workplace is important for effective strategic planning. Companies should create a roadmap that outlines necessary skill training and role adjustments, perhaps even utilizing AI-based training solutions to facilitate this transformation. Such proactive measures can help not just in retaining talent but also in fostering a culture of adaptability and lifelong learning.

The discussion around the impact of generative AI on employment should not devolve into a binary dialogue of jobs gained or lost. The real conversation is about transformation. How do we adapt, how do we pivot, and how do we thrive in a rapidly evolving workplace? Critical thinking skills, interestingly, appear to provide a buffer against the downsides of rapid AI-induced change.

The benefits of these emerging AI tools are many, from increased productivity to skill levelling, but we need to recognise that these can come with their own set of complexities. The future is not about AI versus humans; it's about AI and humans. It's about co-evolution. And on this journey, we need to bring caution, preparedness, and adaptability along with us.

 

About the Author
Jonathan has led and grown a range of businesses, both national and international, building strong customer and enterprise value – working closely with stakeholders to ensure their purpose and vision are clear, and that strategies and goals are well defined, aligned, and focused. In Managing Director / CEO-level roles at Datacom Group for 8+ years, Jonathan led successful technology businesses spanning New Zealand and Australia, including large SaaS, cloud, enterprise software, and data centre businesses. He has led industry-focused teams and businesses (products and solutions for government, payroll/HRIS, education, health care, telecommunications, and media and entertainment), overseen the commercialisation of a range of software products, and developed international third-party technology product ecosystems and partnerships. Jonathan has considerable experience working with and on Boards of Directors, and brings 16+ years' experience in technology product management, industry and consumer marketing, business strategy, and business development. He now applies this extensive experience, offering a range of management consultancy services to help leaders in other organizations achieve the next level of their success.

If you have further questions, or would like to speak with the author or other similar experts, please call +612 9188 7832 or get in touch.