
Beyond the script: The future of digital customer service

Companies noticed that their customers were getting frustrated by waiting for customer service agents, even for simple queries. “All our agents are busy. Your call is important to us. Please wait.” became a dreaded message for customers looking for answers to simple questions. So many companies launched chatbots as part of digitalizing their customer service.

Chatbots could address only one set of customers: those with very basic queries. Chatbots are programmed to respond to specific commands or questions with predefined responses. They can’t grasp complex questions or deviate from their script, they offer generic answers based on pre-programmed rules and keywords, and they don’t improve over time without manual updates.
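The rule-and-keyword mechanism described above can be sketched in a few lines. This is a deliberately minimal toy, not any vendor’s implementation; the keywords and replies are made up for illustration:

```python
# Minimal rule-based chatbot: maps keywords to canned replies.
# Anything outside the keyword table falls through to a generic answer.
RULES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5-7 business days.",
    "password": "Use the 'Forgot password' link on the login page.",
}

FALLBACK = "Sorry, I didn't understand. Please hold for an agent."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:   # plain substring match, no notion of intent
            return answer
    return FALLBACK           # can't deviate from the script
```

`reply("How do I get a refund?")` hits the refund rule, but any rephrasing that avoids the keyword (“I want my money back”) falls through to the fallback, which is exactly the limitation described above.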

Then came AI chatbots, with a promise to divert more customers away from human agents. They are expected to be smarter and more flexible because they use Natural Language Processing (NLP) to understand intent and context. They can handle a wider range of questions, respond in a more natural way, adapt and improve through machine learning to offer more relevant responses over time, and tailor responses based on user history and preferences. These chatbots were expected to reduce response times and improve the overall customer experience.

So, how has customer behavior changed after this advancement in chatbot technology? A recent survey commissioned by the customer experience platform CallVu throws some interesting light on this.

Figure 1: Source: CallVu – AI in Customer Service Survey Mar 2024

A significant percentage of people, 81%, indicated a readiness to wait, for varying durations, to talk with a live agent. 16% were ready to wait more than 10 minutes to talk to a live agent! Only 14% seemed ready to go straight to interacting with a chatbot.

Now combine the above survey findings with the findings below, in the same report.

Figure 2: Source: CallVu – AI in Customer Service Survey Mar 2024

As CallVu found, people rated live agents much higher than AI assistants on most dimensions, with slight rating advantages for AI assistants on speed, patience, and accuracy.

The interesting part is that customers prefer talking to live agents for venting frustration, indicating a role beyond problem resolution: the exhibition of empathy. However, it is also clear that customers prefer interacting with chatbots for simple queries with accurate answers, and chatbot interactions seem to strike customers as more ‘patient’.

Does this mean there is no road ahead for digitalization of customer service interactions using chatbots? A few other surveys show data to the contrary. The sixth edition of Salesforce’s ‘State of the Connected Customer’ report brings out the fact that 61% of customers would still prefer to use self-service to resolve an issue. But 68% warn that after a bad experience they will never use that company’s self-service again. With these findings, Salesforce makes the case for an opportunity to further improve the experience through more intelligent, autonomous agents powered by Generative AI.

What Salesforce promises through its ‘Einstein’ autonomous AI service agent gives a peek into what to expect from such agents as other Independent Software Vendors bring similar products to market.

Sophisticated reasoning and natural responses: Fluid, intelligent conversations, coupled with logical inferences that connect information from the company’s various data sources.

24/7 swift resolutions driven by trusted data: Responses grounded in the company’s trusted business data.

Built-in guardrails: Including protection of PII (Personally Identifiable Information).

Cross-channel and multimodal: Self-service portals, WhatsApp, Apple Messages for Business, Facebook Messenger, SMS, and so on.

Seamless handoffs to human agents: Handoff to a human agent, if needed, with the full context of the conversation – for example, when something must be handled outside defined policy.

Only time will tell whether this will move the needle in the right direction for customers to start relying on digital means more and more to get their service requests resolved. In the near future, we might see a hybrid environment where all three types coexist. Traditional chatbots can handle simple tasks, while AI chatbots manage complex interactions. Autonomous AI chatbots can take on more advanced roles, working alongside humans.

GenAI Adoption – Challenges in Manufacturing Enterprise

While discussions have been ongoing regarding the use of fine-tuned Large Language Models (LLMs) for specific enterprise needs, the high cost associated with cloud-based LLMs, including subscription fees and API usage charges, is becoming increasingly evident. This cost barrier has been a major hurdle for many enterprises seeking to transition GenAI-powered solutions from pilot programs to production environments.

Conversely, Small Language Models (SLMs) appear to be a more suitable option for businesses seeking specialized applications due to their lower cost and resource requirements. Enterprises typically operate with constrained budgets. Piero Molino (Chief Scientific Officer & Cofounder of Predibase, creator of Ludwig, formerly of Uber’s AI Lab), predicts that SLMs will be a major driver of enterprise adoption in 2024 due to their attractive financial proposition.

But within the enterprise sector, manufacturing enterprises, especially medium and small manufacturers, will likely be among the slowest adopters of GenAI in their operations. Let us explore the reasons, because the combination of Industry 4.0’s data collection and connectivity with GenAI’s analytical and generative capabilities has significant potential to transform manufacturing into a more autonomous, intelligent, and efficient system.

Business Hurdles

Cost

The high cost of Large Language Models (LLMs) is a major hurdle for their adoption in manufacturing. LLMs require massive computing power for training and inference, making the enterprises reliant on cloud providers. However, cloud provider fees can scale rapidly with model size and usage, and vendor lock-in can be a concern.

Small Language Models (SLMs) offer a potential solution. Their lower computational footprint makes on-premises deployment a possibility. However, implementing an SLM requires expertise in machine learning and model training, which some enterprises may lack. Hiring additional staff or finding a vendor with this expertise is an option, but maintaining an SLM on-premises can be complex and requires significant IT infrastructure.

For many manufacturing enterprises, the complexity and cost of on-premises SLM maintenance might outweigh the benefit of reduced cloud costs. This could push them back to cloud-based SLMs, landing them where they started.
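The cloud-versus-on-premises trade-off above is ultimately a break-even calculation. The sketch below compares a per-token cloud price against amortized hardware plus operations cost; every figure is a hypothetical placeholder, and real quotes for hardware, staffing, and API pricing would need to be plugged in:

```python
# Back-of-the-envelope break-even check: cloud-hosted SLM (pay per token)
# vs. on-premises SLM (fixed hardware + staff/ops cost).
# All figures below are illustrative assumptions, not vendor quotes.

def monthly_cloud_cost(tokens_per_month: float, price_per_1k_tokens: float) -> float:
    return tokens_per_month / 1000 * price_per_1k_tokens

def monthly_onprem_cost(hardware_capex: float, amortization_months: int,
                        monthly_ops_cost: float) -> float:
    # Hardware amortized linearly; ops cost covers power, IT staff, ML upkeep.
    return hardware_capex / amortization_months + monthly_ops_cost

cloud = monthly_cloud_cost(tokens_per_month=50_000_000, price_per_1k_tokens=0.002)
onprem = monthly_onprem_cost(hardware_capex=60_000, amortization_months=36,
                             monthly_ops_cost=4_000)

print(f"Cloud: ${cloud:,.0f}/month, On-prem: ${onprem:,.0f}/month")
```

With these placeholder numbers the cloud option is far cheaper at moderate volumes; on-premises only pays off at much higher token throughput, which is precisely why many manufacturers end up back where they started.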

Security Concerns

Security concerns around data privacy are a major hurdle for manufacturing companies considering both external vendors and cloud adoption. Medium and small manufacturers in particular have usually viewed the cloud with apprehension.

Change Management

Implementing Generative AI (GenAI) solutions can necessitate significant modifications to existing manufacturing software and may require changes to current processes. While change management might be straightforward for greenfield projects (entirely new systems), most implementations will be brownfield projects (upgrades to existing systems). Manufacturers are understandably hesitant to disrupt well-functioning manufacturing processes unless there’s a compelling reason. Therefore, a robust business case and a well-defined plan for minimizing disruption during change management are crucial.

Technical Hurdles

Data Challenges

GenAI models require large amounts of clean, labelled data to train effectively. Manufacturing processes can be complex and generate data that is siloed, inconsistent, or proprietary. So, unless there are existing observability solutions that have captured sensor telemetry over a period of time, the manufacturer cannot directly introduce a GenAI solution. Additionally, companies may be hesitant to share this data with external vendors.

Integration Complexity

Integrating GenAI solutions with existing manufacturing systems can be complex and require expertise in both AI and manufacturing technologies. Vendors may need to have experience working with similar manufacturing systems to ensure a smooth integration. Existing vendors may have to be roped in for the integration, which would incur additional cost. Integration governance could become complex.

Lack of Standardization

The field of GenAI is still evolving, and there is a lack of standardization in tools and techniques. This can make it difficult for companies to evaluate and select the right vendor for their needs.

Accuracy

SLMs are likely less susceptible to hallucination and bias than LLMs. Because SLMs are trained on a smaller amount of data, typically focused on a specific domain or task, and have a simpler architecture, they are less prone to situations where the model invents information or connections that aren’t there.

Data quality still matters, though. Even with a smaller dataset, bias can be present if the training data itself is biased. In manufacturing systems, bias can take forms such as plant-shift bias, machine-life bias, role-importance bias, and vendor bias. Bias can also build up through the feedback loop from new production output.

Less Established Tools and Expertise

There are fewer established tools and frameworks specifically designed for SLMs compared to LLMs. Finding experts with experience in implementing SLM-based GenAI solutions might be more challenging.

Conclusion

What you will notice is that though there is a cost reduction in using an SLM instead of an LLM, the challenges and hurdles remain almost the same. The hesitation manufacturers felt about LLM-based solutions remains for SLM-based solutions, in many cases preventing them from moving from pilot to production. That hesitation needs to be tackled on a priority basis to unlock the potential of SLMs for the future of smart manufacturing.

Gen AI adoption: Is your budget ready?

As the adoption of Generative AI in the enterprise accelerates, one question will be on management’s mind: “What does AI cost?” The answer, like most things in business, is nuanced: it depends on the specific needs of the enterprise.

For a rough estimate, you can look at comparable businesses. For example, small enterprises with limited budgets might begin with AI-powered chatbots to automate customer support, freeing up existing staff for more complex tasks. But stopping at rough estimates is not a good approach.

Underestimating the importance of proper budgeting for adopting and operationalizing AI in the enterprise can be disastrous. A cautionary tale comes from cloud adoption, where unforeseen costs have triggered an exodus of businesses from cloud back to on-premises infrastructure.

Many sources, like those in Reference 1, meticulously list and explain the various cost factors involved. In a diagram here, these costs have been mapped onto different stages of Generative AI adoption. I haven’t elaborated on all stages in the diagram because some warrant their own detailed illustration.

Let us look at why a particular cost matters in the indicated stage. The assumption here is that the enterprise aims to fully implement and manage GenAI itself. The understanding would need a little tweaking, but would hold good even when the enterprise decides to partially or completely outsource this activity.

Consultant Cost:

  • This is the cost of a consultant or consulting firm who will provide guidance and support throughout the AI adoption process.
  • Where this cost is not indicated in the diagram, it is minimal compared to the other costs at that stage.

Talent Cost:

  • Primarily encompasses the costs associated with reskilling current staff and hiring new talent.
  • Exercise caution, as advised by Hugo Huang, regarding the specific skills and headcount required for both AI solution implementation and ongoing maintenance.
  • Meticulous planning and budgeting are essential to prevent cost overruns.
  • While not explicitly indicated in the diagram in some stages, staff costs are assumed to be integrated within other categories such as Software Development Cost, Data Preparation Cost, and Rollout costs.

Cloud Cost:

  • Initial cloud costs will arise during the training phase, gradually scaling up to the target level during Rollout.
  • Carefully anticipate and plan for these costs, which are distributed across multiple stages.
  • If opting for an on-premises setup instead of cloud-based infrastructure, accurately factor in the equivalent costs.
  • Transitioning to on-premises infrastructure may necessitate a comprehensive review of existing infrastructure, potentially requiring additional efforts and budget allocation.

Inference Cost:

  • Initial inference costs will begin during training, escalating significantly during the three stages of Rollout. During steady-state operations, this will be a major contributor to ongoing cost.
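To make the scaling behaviour of inference cost concrete, here is a minimal estimate in the spirit of the per-token calculations in Reference 2. The request volumes and per-1K-token prices are illustrative assumptions, not any vendor’s actual rates:

```python
# Rough steady-state inference cost estimate for a token-priced GenAI API.
# All prices and volumes are illustrative assumptions, not vendor quotes.

def monthly_inference_cost(requests_per_day: int,
                           input_tokens: int, output_tokens: int,
                           price_in_per_1k: float, price_out_per_1k: float,
                           days: int = 30) -> float:
    per_request = (input_tokens / 1000 * price_in_per_1k
                   + output_tokens / 1000 * price_out_per_1k)
    return per_request * requests_per_day * days

# Example: 10,000 requests/day, 500 input + 200 output tokens per request.
cost = monthly_inference_cost(10_000, 500, 200,
                              price_in_per_1k=0.0005, price_out_per_1k=0.0015)
print(f"Estimated inference cost: ${cost:,.2f}/month")
```

Because the formula is linear in request volume, a successful rollout that triples usage triples this line item, which is why inference dominates ongoing cost at steady state.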

Data Preparation Cost:

  • Encompasses costs associated with data scientists, data analysts, and computing infrastructure (either cloud-based or on-premises).
  • Covers tasks such as cleansing, organizing, processing, and labelling data before it’s ready for training.
  • Expect a considerable time and money investment for this stage.
  • Additional costs may arise for implementing scalable and efficient data storage and data management systems.

Software Development Cost:

  • Involves costs related to building and testing applications that facilitate user interaction with the deployed GenAI solution.
  • Includes expenses for IT talent, licenses, and necessary infrastructure.

Fine-Tuning Cost:

  • Accounts for costs of personnel and infrastructure.
  • If synthetic data generation is planned using the trained model, include the related costs as well, such as personnel cost, inference cost, and cloud cost.
  • Budget for this cost only if Fine-Tuning is part of the strategy.

Prompt Engineering Cost:

  • Allocate a budget for this cost if Prompt Engineering is chosen instead of, or in conjunction with, Fine-Tuning.
  • Primarily consists of costs associated with trained personnel.

Integration Cost:

  • This is for the cost of integrating newly built solutions with existing systems to ensure seamless user experience.
  • Involves the time and expertise of staff who manage these existing systems, even if not directly involved in GenAI implementation.
  • May necessitate changes to existing systems, requiring additional budget allocation.

Operations Cost:

  • Covers costs associated with the deployment and ongoing maintenance of the entire solution.

HBR’s Hugo Huang suggests management strategies for the CEO/CIO to control costs. The CEO/CIO will constitute teams to carry out the GenAI adoption, and when reviewing and signing off budgets and costs, an understanding of where the different costs occur will help.

Maryam Ashoori, in her article, gets into the nuts and bolts of how to calculate the different costs. A combination of similar approaches will help the teams constituted by the CEO/CIO make sure that costs are well budgeted and under control.

To maintain competitive advantage, AI adoption in the enterprise is unavoidable. Cost estimation and control is one of the key pillars upon which a successful adoption of AI rests.

References:

  1. What CEOs Need to Know About the Costs of Adopting GenAI by Hugo Huang
    https://hbr.org/2023/11/what-ceos-need-to-know-about-the-costs-of-adopting-genai?ab=HP-latest-image-2
  2. Decoding the True Cost of Generative AI for Your Enterprise by Maryam Ashoori
    https://www.linkedin.com/pulse/decoding-true-cost-generative-ai-your-enterprise-maryam-ashoori-phd/

The ‘Ops’ in the GenAI World

The world of AI and its operational cousins can feel like an alphabet soup: AIOps, MLOps, DataOps, and now GenAIOps. The key lies in understanding their distinct roles and how they can collaborate to deliver the full potential of your GenAI adoption and data investments.

Definitions

AIOps, which stands for Artificial Intelligence for IT Operations, is a rapidly evolving field that aims to leverage AI and machine learning to automate and optimize various tasks within IT operations.

MLOps is a set of practices and tools that bring DevOps principles to the world of machine learning. It aims to automate and streamline the development, deployment, and maintenance of machine learning models in production.

DataOps is essentially a set of practices, processes, and technologies that aim to improve the management and delivery of data products and applications. It borrows heavily from the DevOps methodology and applies it to the world of data.

GenAIOps is the emerging field that applies the principles of AIOps, DataOps, and MLOps to the specific challenges of managing and optimizing Generative AI systems.

Key Activities and Benefits

The summary below captures the key objectives, main activities, and benefits of these ‘Ops’ areas.

AIOps – Optimize AI infrastructure and operations

  • Main activities: automate manual tasks (incident detection, root cause analysis, remediation); improve monitoring and analytics (AI-powered analysis of IT data); proactive prediction and prevention (issue prediction from historical data); enhance collaboration and decision-making (unified platform for IT teams).
  • Benefits: reduced downtime and costs; improved AI performance; faster problem resolution; more informed decision-making.

MLOps – Ensure an efficient and reliable ML lifecycle

  • Main activities: automate the ML pipeline (data pre-processing, training, deployment, monitoring); foster collaboration and communication (break down silos between teams); implement governance and security (compliance, ethical guidelines).
  • Benefits: faster time to market for ML models; increased model accuracy and reliability; improved model governance and compliance; reduced risk of model failures.

DataOps – Improve data quality, availability, and accessibility

  • Main activities: automate data pipelines (ingestion, transformation, delivery); implement data governance and quality control (standardization, validation); monitor data quality and lineage.
  • Benefits: improved data quality and trust; better decision-making; increased data accessibility and efficiency; reduced data-related errors.

GenAIOps – Streamline and automate generative AI development and operations

  • Main activities: automate generative AI pipelines (data preparation, training, output generation); monitor and manage generative AI models (bias detection, remediation); implement governance and safety controls (bias mitigation, explainability tools); optimize resource allocation and cost management; facilitate collaboration and communication.
  • Benefits: faster development and deployment of generative AI applications; improved innovation and creativity; efficient management of generative AI models; reduced risk of bias and ethical issues in generative AI outputs.

Comparative view

Because implementing GenAIOps would mostly require deploying MLOps, DataOps, and AIOps as well, it is worthwhile to analyze their distinctions and overlaps.

AIOps and MLOps

One uses AI, while the other applies DevOps principles.

AIOps:

  • Focus: Applying AI to improve IT operations as a whole.
  • Goals: Automate tasks, improve monitoring and analytics, predict and prevent issues, enhance collaboration and decision-making.
  • Examples: Using AI to detect network anomalies, automate incident resolution, or predict server failures.

MLOps:

  • Focus: Operationalizing and managing machine learning models effectively.
  • Goals: Automate the ML pipeline, deploy and monitor models in production, optimize performance, and ensure reliable and scalable operation.
  • Examples: Automating data pre-processing for model training, continuously monitoring model accuracy and bias, or automatically rolling back models when performance degrades.

Key Differences:

  • Scope: AIOps is broader, focusing on all aspects of IT operations, while MLOps is specifically about managing ML models.
  • Approach: AIOps uses AI as a tool for existing IT processes, while MLOps aims to fundamentally change how ML models are developed, deployed, and managed.
  • Impact: AIOps can improve the efficiency and reliability of IT operations, while MLOps can accelerate the adoption and impact of ML models in real-world applications.

Overlap and Synergy:

  • There is some overlap between AIOps and MLOps, especially in areas like monitoring and automation.
  • They can work together synergistically: AIOps can provide data and insights to improve MLOps, and MLOps can develop AI-powered tools that benefit AIOps.

So, while their core goals differ, AIOps and MLOps are complementary approaches that can together drive AI adoption and optimize both IT operations and ML models.

MLOps and GenAIOps

In the sense of focusing on operationalizing models, MLOps and GenAIOps share a similar core objective. Both aim to streamline the processes involved in deploying, monitoring, and maintaining models in production effectively. However, there are some key differences that distinguish them:

Type of models:

  • MLOps: Primarily focuses on managing traditional machine learning models used for tasks like classification, regression, or forecasting.
  • GenAIOps: Specifically deals with operationalizing Generative AI models capable of generating creative outputs like text, images, code, or music.

Challenges and complexities:

  • MLOps: Faces challenges like data quality and bias, model performance monitoring, and resource optimization.
  • GenAIOps: Grapples with additional complexities due to the unique nature of Generative AI, including:
    • Data diversity and bias: Ensuring diversity and mitigating bias in training data, as Generative AI models are particularly sensitive to these issues.
    • Explainability and interpretability: Providing tools and techniques to understand how Generative AI models make decisions and interpret their outputs, both for developers and users.
    • Ethical and regulatory considerations: Addressing ethical concerns and complying with relevant regulations surrounding Generative AI applications.

Tools and techniques:

  • MLOps: Tools for automating data pipelines, deploying models, monitoring performance, and managing resources might be sufficient.
  • GenAIOps: May require specialized tools and techniques tailored to address the unique challenges of Generative AI, such as:
    • Bias detection and mitigation tools: To identify and address potential biases in training data and model outputs.
    • Explainability frameworks: To facilitate understanding of how Generative AI models make decisions.
    • Content filtering and moderation tools: To ensure safe and responsible generation of outputs.
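Of the specialized tools listed above, a content-moderation gate is the easiest to picture: it sits between the generative model and the user and screens outputs before delivery. The sketch below is a toy keyword screen; production GenAIOps pipelines would use trained classifiers, and the blocklist terms here are purely illustrative:

```python
# Toy output-moderation gate of the kind a GenAIOps pipeline might place
# between a generative model and the user. Real systems use trained
# classifiers; this keyword screen only illustrates the control point.

BLOCKLIST = {"credit card number", "password", "ssn"}

def moderate(generated_text: str) -> tuple[bool, str]:
    """Return (allowed, text). Blocked outputs are replaced with a notice."""
    lowered = generated_text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return False, "[response withheld by safety filter]"
    return True, generated_text

ok, text = moderate("Your order ships tomorrow.")          # allowed
blocked, notice = moderate("Please send me your password.")  # withheld
```

The design point is that moderation is a separate, auditable stage in the pipeline: the model’s raw output never reaches the user directly, so the filter can be updated or replaced without retraining the model.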

While both MLOps and GenAIOps share the general goal of operationalizing models, the specific challenges and complexities faced by Generative AI necessitate the development of specialized tools and practices within GenAIOps.

Collaboration:

  • AIOps and GenAIOps: These fields can coexist and complement each other within an organization. AIOps focuses on broader IT operations, while GenAIOps specifically addresses the unique challenges of managing Generative AI models. They can share data and insights to improve overall AI-driven decision-making and optimization.
  • MLOps and GenAIOps: While both focus on model operationalization, GenAIOps can be considered a specialized subset of MLOps that addresses the unique needs of Generative AI models. In organizations heavily invested in Generative AI, GenAIOps practices might naturally subsume the broader MLOps practices, ensuring tailored governance and operational efficiency for these advanced models.

Integration considerations:

  • Scope and Focus: Clearly define the scope of each field within your organization to ensure alignment and avoid overlap.
  • Tooling and Infrastructure: Evaluate whether existing MLOps tools can adequately support GenAIOps requirements or if specialized tools are needed.
  • Skill Sets: Foster cross-team collaboration and knowledge sharing to bridge gaps between different AIOps, MLOps, and GenAIOps teams. This is one of the most important considerations to keep operations cost down.

Summary and Future Outlook

  • AIOps and GenAIOps can coexist and collaborate for broader IT optimization and responsible Generative AI management.
  • GenAIOps can subsume MLOps practices in organizations with a strong focus on Generative AI, ensuring tailored governance and efficiency.
  • This convergence could lead to more comprehensive platforms and tools that address the entire AI lifecycle, from development to deployment, monitoring, and maintenance.

References

  1. What is AIOps? : https://www.ibm.com/topics/aiops
  2. What is MLOps and Why It Matters: https://www.databricks.com/glossary/mlops
  3. GenAIOps: Evolving the MLOps Framework: https://towardsdatascience.com/genaiops-evolving-the-mlops-framework-b0012f936379
  4. AI Project Management: The Roadmap to Success with AI, DataOps, and GenAIOps: https://www.techopedia.com/ai-project-management-the-roadmap-to-success-with-mlops-dataops-and-genaiops

Generative AI in Product Genealogy Solution in Manufacturing

The demand for guaranteed product quality through comprehensive traceability is rapidly spreading beyond the pharmaceutical industry and into other manufacturing sectors. This rising demand stems from both increased customer awareness and stricter regulations. To address this need, manufacturers are turning to Product Traceability, also known as Product Genealogy, solutions.

Efforts over the past 4-5 years, even by Micro, Small and Medium Enterprises (MSMEs), to embrace digitalization and align with Industry 4.0 principles have paved the way for the deployment of hybrid Product Genealogy solutions. These solutions combine digital technology with human interventions. However, the emergence of readily available and deployable Generative AI models presents a promising opportunity to further eliminate human intervention, ultimately boosting manufacturing profitability.

To illustrate this potential, let’s consider the Long Steel Products Industry. This industry encompasses a diverse range of products, from reinforcement bars (rebars) used in civil construction with less stringent requirements, to specialized steel rods employed in demanding applications like automobiles and aviation.

The diagram below gives a high-level view of the manufacturing process stages.

Beyond core process automation done under Industry 3.0, steel manufacturers have embraced digitalization through Visualization Solutions. These solutions leverage existing sensors, supplemented by new ones and IIoT (Industrial IoT) technology, to transform data collection. They gather data from the production floor, send it to cloud hosted Visualization platforms, and process it into meaningful textual and graphical insights presented through dashboards. This empowers data-driven decision-making by providing valuable management insights, significantly improving efficiency, accuracy, and decision-making speed, ultimately benefiting the bottom line.

However, human involvement remains high in decision-making, defining actions, and implementing them on the production floor. This is where Generative AI, a disruptive technology, enters the scene.

Imagine a production process equipped with a pre-existing Visualization solution, constantly collecting data from diverse sensors throughout the production cycle. Let’s explore how Generative AI adds value in such a plant, specifically focusing on long steel products where each batch run (“campaign”) typically produces rods/bars with distinct chemical compositions (e.g., 8mm with one composition, 14mm with another).

Insights and Anomalies

  • Real-time data from diverse production sensors (scrap sorting, melting, rolling, cooling) feeds into a Time-Series database. This multi-modal telemetry data, like temperature, pressure, chemical composition, vibration, visual information etc., fuels a Visualization platform generating predefined dashboards and alerts. With training and continuous learning, Generative AI models analyse this data in real-time, identifying patterns and deviations not envisaged by predefined expectations. These AI-inferred insights, alongside predefined alerts, highlight potential issues like unexpected temperature spikes, unusual pressure fluctuations, or off-spec chemical composition.
  • If trained on historical and ongoing ‘action taken’ data, the AI model can generate partial or complete configurations (“recipes”) for uploading to PLCs (Programmable Logic Controllers). These recipes, tailored for specific campaigns based on desired results, adjust equipment settings like temperature, cooling water flow, and conveyor speed. The PLCs then transmit these configs to equipment controllers, optimizing production for each unique campaign.
  • Individual bars can be identified within a campaign using QR code stickers, engraved codes, or even software-generated IDs based on sensor data. This ID allows the AI to link process and chemical data (known as ‘Heat Chemistry’) to each specific bar. This information helps identify non-conforming products early, preventing them from reaching final stages. For example, non-conforming bars can be automatically separated at the cooling bed before reaching bundling stations.
  • Customers can access detailed information about the specific processes and materials used to create their steel products, including actual chemistry and physical quality data points. This transparency builds trust in the product’s quality and origin, differentiating your brand in the market.
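The real-time deviation spotting described in the first bullet can be illustrated with a minimal rolling z-score check on a single telemetry channel. A trained model would analyze many channels jointly; this simplified stand-in, with made-up temperature values, only shows the idea of flagging readings that depart from recent behaviour:

```python
import statistics

# Minimal anomaly check on one telemetry channel (e.g. furnace temperature, °C).
# Flags any reading more than `threshold` standard deviations away from the
# mean of the preceding `window` readings.

def flag_anomalies(readings: list[float], window: int = 5,
                   threshold: float = 3.0) -> list[int]:
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        std = statistics.stdev(recent) or 1e-9   # guard against zero spread
        if abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

temps = [1520, 1522, 1519, 1521, 1520, 1580, 1521]   # spike at index 5
print(flag_anomalies(temps))   # → [5]
```

In a plant, the flagged index would be joined with the campaign and bar ID so the deviation lands in the same enriched record the genealogy solution maintains.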

Enriched Data Records

  • The AI model’s capabilities extend beyond mere interpretation of raw sensor data—it actively enriches it with additional information. This enrichment process encompasses:
    • Derived features: AI extracts meaningful variables from sensor data, such as calculating cooling rates from temperature readings or estimating carbon content from spectral analysis.
    • Contextualization: AI seamlessly links data points to specific production stages, equipment used, and even raw material batch information, providing a holistic view of the manufacturing process.
    • Anomaly flagging: AI vigilantly marks data points that deviate from expected values, making critical events easily identifiable and facilitating prompt corrective actions. This also helps in continuous learning by the AI model.
  • This enriched data forms a comprehensive digital history for each bar, providing invaluable insights that fuel process optimization and quality control initiatives.
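As a concrete sketch of the “derived features” idea above, the snippet below computes a cooling rate from timestamped temperature readings and attaches it to a bar’s record. The field names and the bar identifier are hypothetical, not from any specific platform:

```python
# Sketch of a "derived feature": cooling rate computed from timestamped
# temperature readings, then attached to a bar's enriched data record.
# Field names and the bar ID are illustrative assumptions.

def cooling_rate(samples: list[tuple[float, float]]) -> float:
    """samples: time-ordered (seconds, temperature °C) pairs.
    Returns average cooling rate in °C per second (positive = cooling)."""
    (t0, temp0), (t1, temp1) = samples[0], samples[-1]
    return (temp0 - temp1) / (t1 - t0)

record = {
    "bar_id": "QR-2024-00042",   # hypothetical ID from a QR-code sticker
    "cooling_rate_c_per_s": cooling_rate([(0, 900.0), (30, 810.0), (60, 738.0)]),
}
print(record["cooling_rate_c_per_s"])   # → 2.7
```

Derived values like this are cheap to compute at ingestion time, yet they are what downstream quality checks and AI models actually consume, rather than the raw sensor stream.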

While the aforementioned functionalities showcase Generative AI’s immediate impact on traceability, its potential extends far beyond. Trained and self-learning models pave the way for advancements like predictive maintenance, product simulation, waste forecasting, and even autonomous recipe management. However, these exciting future applications lie beyond the scope of this blog.

Despite its nascent stage in long steel product genealogy, Generative AI is already attracting significant attention from various companies and research initiatives. This growing interest underscores its immense potential to revolutionize the industry.

Challenges and Considerations

  • Data Quality and Availability: The success of AI-powered traceability hinges on accurate and complete data throughout the production process. Integrating AI with existing infrastructure and ensuring data consistency across systems pose significant challenges.
  • Privacy and Security Concerns: Sensitive data about materials, processes, and customers must be protected. Secure data storage, robust access control mechanisms, and compliance with relevant regulations are paramount.
  • Scalability and Cost-Effectiveness: Implementing AI-based solutions requires investment in hardware, software, and expert skills. Careful ROI analysis and planning are crucial to avoid budget overruns. Scaling these solutions to large facilities and complex supply chains requires thoughtful cost analysis and strategic planning.

By addressing these challenges and unlocking the power of Generative AI, manufacturers can establish robust and transparent product traceability systems. This, in turn, will lead to enhanced product quality, increased customer trust, and more sustainable practices.

GenAI & LLMs: Impact on Human Jobs

I met the IT head of a leading manufacturing company at a social gathering. During our discussion, he told me with great conviction that current AI progress is destructive for jobs done by humans and that doomsday is coming. I realized that many people likely hold a similar opinion, one I feel needs to be corrected.

A good starting point for understanding the impact of AI on jobs done by humans today is the World Economic Forum’s white paper published in September 2023 (Reference 1). It gives us a fascinating glimpse into the future of work in the era of Generative AI (GenAI) and Large Language Models (LLMs). The report sheds light on the intricate dance between Generative AI and the future of employment, revealing some nuanced trends that are set to reshape the job market. A few key messages from the paper follow.

At the heart of the discussion is the distinction between jobs that are ripe for augmentation and those that face the prospect of automation. According to the report, jobs that involve routine, repetitive tasks are at a higher risk of automation. Tasks that can be easily defined and predicted might find themselves in the capable hands of AI. Think data entry, basic analysis, and other rule-based responsibilities. LLMs, with their ability to understand and generate human-like text, excel in scenarios where the tasks are well-defined and can be streamlined.

However, it’s not a doomsday scenario for human workers. In fact, the report emphasizes the idea of job augmentation rather than outright replacement. This means that while certain aspects of a job may be automated, there’s a simultaneous enhancement of human capabilities through collaboration with LLMs. It’s a symbiotic relationship where humans leverage the strengths of AI to become more efficient and dynamic in their roles. For instance, content creation, customer service, and decision-making processes could see a significant boost with the integration of LLMs.

Interestingly, the jobs that seem to thrive in this evolving landscape are the ones requiring a distinctly human touch. Roles demanding creativity, critical thinking, emotional intelligence, and nuanced communication are poised to flourish. LLMs, despite their impressive abilities, still grapple with the complexity of human emotions and the subtleties of creative expression. This places humans in a unique position to contribute in ways that machines currently cannot. At the same time, the ability of LLMs to understand context, generate human-like text, and assist in complex problem-solving positions them as valuable tools for humans.

Imagine a future where content creation becomes a collaborative effort between human creativity and AI efficiency, or where customer service benefits from the empathetic understanding of LLMs. Decision-making processes, too, could see a paradigm shift as humans harness the analytical prowess of AI to make more informed and strategic choices.

New types of jobs are also being created, so-called emerging jobs. Ethics and Governance Specialist is one such emerging job.

The paper also brings together a view of job exposure by functional area and by industry group, ranking a large number of jobs by their augmentation and automation potential, to give the reader a feel for what is stated above.

In essence, the report paints a picture of a future where humans and AI are not adversaries but partners in progress. The workplace becomes a dynamic arena where humans bring creativity, intuition, and emotional intelligence to the table, while LLMs contribute efficiency, data processing power, and a unique form of problem-solving. The key takeaway is one of collaboration, where the fusion of human and machine capabilities leads to a more productive, innovative, and engaging work environment. So, as we navigate this evolving landscape, it’s not about job replacement; it’s about embracing the opportunities that arise when humans and LLMs work hand in virtual hand.

 

References:

1. Jobs of Tomorrow: Large Language Models and Jobs. A World Economic Forum (WEF) white paper, jointly authored by WEF and Accenture, September 2023. https://www3.weforum.org/docs/WEF_Jobs_of_Tomorrow_Generative_AI_2023.pdf

 

 

AI Legislation: The Need for Urgency

Let me first wish you all happy Navratri festivities. I still fondly remember the Durga Pooja days of my Indian Statistical Institute years. However, we must also remember that we are in the midst of two wars, one in Ukraine and the other in the Middle East. We hope solutions are found and further loss of life and destruction is stopped.

I came across two articles in The Hindu newspaper regarding our topic, AI. I have attached a scan of an editorial by M.K. Narayanan, a well-known national security and cyber expert.

A few highlights are worth mentioning for all of us to ponder.

  • There is general agreement that the latest advances in AI pose a major threat and need to be regulated, much like nuclear technologies.
  • Countries are not merely “locking the gates after the horse has bolted”; they are still “discussing whether to lock the gates, and deciding on the make and model of the lock, while the horse has bolted”. The huge delay in enacting and implementing AI legislation is flagged as a big issue.
  • Rogue nations that willfully decide not to enforce any regulations will gain a huge advantage over law-abiding nations.
  • More than 50% of large enterprises are sitting on “intangible” assets that are at huge risk of being wiped out by non-state actors using AI-powered cyber warfare.
  • Cognitive warfare using AI technologies will destabilize governments and news media and alter human cognition.
  • This is a new kind of warfare in which states and technology companies must closely collaborate.
  • Another interesting point is that over-dependence on AI and algorithms may have contributed to the major intelligence failure in the latest Middle East conflict.

All of these point to the same conclusion. All countries, multilateral organizations such as the UN, EU, African Union, and G20, and multilateral military alliances like NATO must move at lightning speed to understand and agree on measures to effectively control and use this great technology.

The classic advertising slogan “JUST DO IT” must be the motto of all these organizations.

Similar efforts are needed by all large enterprises, large financial institutions, and regulatory agencies to get ready for the at-scale implementation of these technologies.

Last but not least, large technology companies need to look at this not just as another innovation that helps automation, but as a technology that affects humans and causes major disruption, and they must spend sufficient resources on understanding it and putting sufficient brakes in place to avoid runaway situations.

Cybersecurity, ethics, and risk-management auditors will have huge opportunities, and they have to start upskilling fast.

More later,

L Ravichandran.

To Be Or Not To Be – GPT4 Applications

Posting on behalf of L Ravichandran

I saw this talk, organized by a company called Steamship, on YouTube:
 
GPT-4 – How does it work, and how do I build apps with it? – CS50 Tech Talk
 


One of the key speakers talked about the various categories of applications being built using GPT-4. No. 1 is the “Companionship” category of applications.
 
He expanded on the Companionship category: a mentor, a coach, a friend who will give you the right feedback, a friend who will always empathize with you, and so on. People are using these personas to find solace and comfort by “talking” to these companions.
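Mechanically, such companion personas are typically just a system prompt prepended to the chat history before it is sent to a model like GPT-4. The sketch below is illustrative only: the persona texts and function are my own assumptions, not anything from the talk, and the actual model call is omitted.

```python
# Hypothetical persona prompts (illustrative, not from the talk).
PERSONAS = {
    "mentor": "You are a patient mentor. Give honest, constructive feedback.",
    "coach":  "You are an encouraging coach. Keep the user motivated.",
    "friend": "You are an empathetic friend. Listen and validate feelings.",
}

def build_messages(persona, history, user_input):
    """Assemble the message list a chat-completion endpoint would receive."""
    return (
        [{"role": "system", "content": PERSONAS[persona]}]  # persona lives here
        + history                                            # prior turns
        + [{"role": "user", "content": user_input}]          # latest message
    )

msgs = build_messages("mentor", [], "I failed my exam and feel terrible.")
print(msgs[0]["role"])  # system
print(len(msgs))        # 2
```

Swapping the persona is just swapping the system prompt, which is why a single underlying model can present as a mentor, a coach, or an empathetic friend.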
 
As I watched this video, I was disturbed and at the same time became inquisitive. What do we humans want? Do we want to communicate with GPT companions or with flesh-and-blood human companions? Are we settling for GPT companions because current society does not support human-to-human contact and communication?
 
The large family cluster of extended families living nearby is gone as we move away to distant suburbs. The number of children per family is falling fast. Physical games are being replaced by online virtual games; friends are few, and even those few friends are content with virtual communication.
 
I know these are questions for philosophers, psychologists, and social scientists to answer. I hope they look seriously at this new phenomenon and assess its impact on human society.
 
I will conclude with the famous Shakespeare line “To be, or not to be”. “To be a human, or not to be a human” is the new question.