
GenAi & LLM: Impact on Human Jobs

I recently met the IT head of a leading manufacturing company at a social gathering. When he told me, with great conviction, that current AI progress is destructive for jobs done by humans and that doomsday is coming, I realized that many people probably hold a similar opinion, one that, I felt, needs to be corrected.

A good starting point for understanding the impact of AI on jobs done by humans today is the World Economic Forum’s white paper published in September 2023 (Reference 1). It gives us a fascinating glimpse into the future of work in the era of Generative AI (GenAI) and Large Language Models (LLMs). The report sheds light on the intricate dance between Generative AI and the future of employment, revealing some nuanced trends that are set to reshape the job market. A few key messages from the paper are summarized below.

At the heart of the discussion is the distinction between jobs that are ripe for augmentation and those that face the prospect of automation. According to the report, jobs that involve routine, repetitive tasks are at a higher risk of automation. Tasks that can be easily defined and predicted might find themselves in the capable hands of AI. Think data entry, basic analysis, and other rule-based responsibilities. LLMs, with their ability to understand and generate human-like text, excel in scenarios where the tasks are well-defined and can be streamlined.

However, it’s not a doomsday scenario for human workers. In fact, the report emphasizes the idea of job augmentation rather than outright replacement. This means that while certain aspects of a job may be automated, there’s a simultaneous enhancement of human capabilities through collaboration with LLMs. It’s a symbiotic relationship where humans leverage the strengths of AI to become more efficient and dynamic in their roles. For instance, content creation, customer service, and decision-making processes could see a significant boost with the integration of LLMs.

Interestingly, the jobs that seem to thrive in this evolving landscape are the ones requiring a distinctly human touch. Roles demanding creativity, critical thinking, emotional intelligence, and nuanced communication are poised to flourish. LLMs, despite their impressive abilities, still grapple with the complexity of human emotions and the subtleties of creative expression. This places humans in a unique position to contribute in ways that machines currently cannot. At the same time, the ability of LLMs to understand context, generate human-like text, and even assist in complex problem-solving positions them as valuable tools for humans.

Imagine a future where content creation becomes a collaborative effort between human creativity and AI efficiency, or where customer service benefits from the empathetic understanding of LLMs. Decision-making processes, too, could see a paradigm shift as humans harness the analytical prowess of AI to make more informed and strategic choices.

The report also points to the creation of entirely new types of jobs, so-called emerging jobs. Ethics and Governance Specialist is one such emerging role.

The paper further brings together a view of job exposure by functional area and by industry group, ranking a large number of jobs by their exposure (augmentation and automation potential), to give the reader a concrete feel for the trends described above.

In essence, the report paints a picture of a future where humans and AI are not adversaries but partners in progress. The workplace becomes a dynamic arena where humans bring creativity, intuition, and emotional intelligence to the table, while LLMs contribute efficiency, data processing power, and a unique form of problem-solving. The key takeaway is one of collaboration, where the fusion of human and machine capabilities leads to a more productive, innovative, and engaging work environment. So, as we navigate this evolving landscape, it’s not about job replacement; it’s about embracing the opportunities that arise when humans and LLMs work hand in virtual hand.

 

References:

1.      Jobs of Tomorrow: Large Language Models and Jobs, September 2023. A World Economic Forum (WEF) white paper jointly authored by WEF and Accenture. https://www3.weforum.org/docs/WEF_Jobs_of_Tomorrow_Generative_AI_2023.pdf

 

 

Insights into AI Landscape – A Preface

AI Landscape and Key Areas of Interest

The AI landscape encompasses several crucial domains, and it’s imperative for any organization aiming to participate in this transformative movement to grasp these aspects. Our objective is to offer our insights and perspective into each of these critical domains through a series of articles on this platform.

We will explore key topics in each of the areas depicted in the diagram below.

1.      Standards, Frameworks, Assurance: We will address International Standards and Frameworks, both those currently in effect and those upcoming. Significant efforts in this area are being undertaken by international organizations like ISO, IEEE, BSI, DIN, and others to establish order by defining these standards. This also encompasses assurance frameworks, ethics frameworks, and the necessary checks and balances for the development of AI solutions. It’s important to note that many of these frameworks are still in development and are being complemented by regulations and laws. Certain frameworks related to cybersecurity and privacy regulations (e.g., GDPR) are expected to become de facto reference points. More details will be provided in the forthcoming comprehensive write-up in Series 1.

2.      Legislations, Laws, Regulations: Virtually all countries have recognized the implications and impact of AI on both professional and personal behavior, prompting many to work on establishing fundamental but essential legislation to safeguard human interests. This initiative began a couple of years ago and has gained significant momentum, especially with the introduction of Generative AI tools and platforms. Europe is taking the lead in implementing legislation ahead of many other nations, and countries like the USA, Canada, China, India, and others are also actively engaged in this area. We will delve deeper into this topic in Series 2.

3.      AI Platforms & Tools: An array of AI platforms and tools is available, spanning various domains, including Content Creation, Software Development, Language Translation, Healthcare, Finance, Gaming, Design/Arts, and more. Generative AI tools encompass applications such as ChatGPT, Copilot, DALL·E 2, Scribe, Jasper, etc. Additionally, AI chatbots like ChatGPT, Google Bard, Microsoft Bing AI, Jasper Chat, and ChatSpot, among others, are part of this landscape. This section will provide insights into key platforms and tools, including open-source options that cater to the needs of users.

4.      Social Impact:  AI Ethics begins at the strategic planning and design of AI systems. Various frameworks are currently under discussion due to their far-reaching societal consequences, leading to extensive debates on this subject. Furthermore, it has a significant influence on the jobs of the future, particularly in terms of regional outcomes, the types of jobs that will emerge, and those that will be enhanced or automated. The frameworks, standards, and legislations mentioned earlier strongly emphasize this dimension and are under close scrutiny. Most importantly, it is intriguing to observe the global adoption of AI solutions and whether societies worldwide embrace them or remain cautious. This section aims to shed light on this perspective.

5.      Others: Use Cases and Considerations:  In this Section, we will explore several use cases and success stories of AI implementation across various domains. We will also highlight obstacles in the adoption of AI, encompassing factors such as the pace of adoption, the integration of AI with existing legacy systems, and the trade-offs between new solutions and their associated costs and benefits.  We have already published a recent paper on this subject, and we plan to share more insights as the series continues to unfold.

Small talk about Large Language Models

Since its formal launch, ChatGPT has received a lot of press and has been the topic of heated discussions in the recent past.

I had played with generative AI some time back and also shared the result in one of my earlier posts.

Post ChatGPT, investments in AI – or more specifically, generative AI tech – based companies have seen a sharp rise.

There is also a general sense of fear across industries, arising from uncertainty and the dread that such technologies could take away specialized jobs and roles.

I was talking to an architect a few days ago and she said that in their community, the awe and fear of AI tech is unprecedented.

With just a few words as input, some of the sketches generated by tools like DALL·E, Craiyon, Stable Diffusion, etc. are apparently very realistic and logical. For example, when the query asked for the porch door opening out into the garden with a path to the main gate, the image was generated in less than a couple of minutes.

With all the promise of creating new content quickly, many questions have also come up, without clear answers.

The first – also a topic of interest on aithougts.org – is that of ethics.

Take deep fakes, for instance. Incidentally, I had experimented with a technology that could have been used for this, when I was looking for tools to simplify podcast editing, on a platform called Descript, where I could train the model with my voice. I had to read a predefined text for about 30 minutes, and then, based on written text, it could synthesize that text in my voice. At that time, the technology was not yet as mature as it is today, so I did not pursue it.

I digress..

Getting back to the debate on generative AI: there is the ethics of originality [I believe there are now tools emerging that can check whether content was generated by ChatGPT!], which could influence how students create their assignment papers, or how ever more marketing content is generated, all based on content that is already available on the net and ingested by the ChatGPT transformer.

Another aspect is the explainability of the generated content. Detecting bias in the output, or factoring in an expert opinion where one is needed, is not possible unless the source of the content is known. The inherent bias in the training data is also difficult to overcome: much of it is historical, and if balanced data was not captured or recorded in the past, the bias would be very difficult to fix, or even adjust for.
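To make the training-data bias point concrete, here is a minimal, hypothetical sketch in Python (not from the paper or any specific product; the "hiring" data and the frequency "model" are invented for illustration): a model that simply learns label frequencies from a skewed historical dataset will faithfully reproduce that skew in its outputs.

```python
from collections import Counter

def train_frequency_model(historical_labels):
    """'Train' by memorizing label frequencies -- a stand-in for how
    statistical models absorb whatever skew the training data carries."""
    counts = Counter(historical_labels)
    total = len(historical_labels)
    return {label: n / total for label, n in counts.items()}

# Hypothetical historical hiring data: 90% of past hires were from group A.
history = ["A"] * 90 + ["B"] * 10
model = train_frequency_model(history)

# The "model" simply echoes the historical skew back as its prediction.
print(model)  # {'A': 0.9, 'B': 0.1}
```

If the imbalance was never recorded as such, nothing in the data itself tells the model that 90/10 reflects past practice rather than ground truth, which is exactly why post-hoc correction is so hard.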

The third aspect is about the ‘originality’ or ‘uniqueness’ of the generated content – let me use the term solution from now on..

There is a lot of work being done in these areas, some in research institutions and some in companies applying them in specific contexts.

I had an opportunity recently to have a conversation with the founder of a startup that is currently in stealth mode, working on a ‘domain aware, large language model based’ generative AI solution.

It was a very interesting conversation that touched upon many of the points above.

 

You can listen to this conversation as a podcast in 2 parts here:

https://pm-powerconsulting.com/blog/the-potential-of-large-language-models-with-steven-aberle/

https://pm-powerconsulting.com/blog/episode-221/

 

Or watch the conversation as a video in 2 parts here:

https://www.youtube.com/watch?v=86fGLa9ljso

https://www.youtube.com/watch?v=f9DnDNUwFBs

 

Do share your comments and experiences with the emerging applications of GANs, Transformers, etc.