
Beyond the script: The future of digital customer service

Some time back, companies noticed that their customers were getting frustrated waiting for customer service agents to handle even simple queries. "All our agents are busy. Your call is important to us. Please wait." became a dreaded message for customers looking for answers to simple questions. So many companies launched chatbots as part of digitalizing their customer service.

Chatbots could address only one set of customers: those with very basic queries. They are programmed to respond to specific commands or questions with predefined responses, relying on pre-programmed rules and keywords. They cannot grasp complex questions or deviate from their script, they offer only generic answers, and they do not improve over time without manual updates.
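
To make the rule-and-keyword approach concrete, here is a minimal sketch of such a bot; the keywords and canned replies are invented for illustration:

```python
# Minimal sketch of a rule-based chatbot: keyword matching against
# predefined responses. Intents and replies here are invented examples.
RULES = {
    "refund": "To request a refund, go to Orders, select the item, and choose Return.",
    "hours": "Our support hours are 9am to 6pm, Monday to Friday.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
}
FALLBACK = "Sorry, I did not understand that. Type 'agent' to reach a human."

def reply(message: str) -> str:
    """Return the first predefined answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK  # anything off-script falls through to here

print(reply("What are your hours?"))    # matches the 'hours' rule
print(reply("My package is damaged"))   # off-script, returns the fallback
```

Anything outside the keyword table drops to the fallback line, which is exactly the "cannot deviate from the script" limitation described above.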

Then came AI chatbots, promising to divert more customers away from human agents. They are expected to be smarter and more flexible because they use Natural Language Processing (NLP) to understand intent and context. They can handle a wider range of questions, respond in a more natural way, adapt and improve through machine learning to offer more relevant responses over time, and tailor responses based on user history and preferences. These chatbots were expected to reduce response times and improve the overall customer experience.

So, how has customer behavior changed after this advancement in chatbot technology? A recent survey commissioned by customer experience platform CallVu throws some interesting light on it.

Figure 1: Source: CallVu – AI in Customer Service Survey Mar 2024

A significant share of respondents, 81%, indicated readiness to wait, for varying durations, to talk with a live agent; 16% were willing to wait more than 10 minutes to talk to a live agent. Only 14% seemed ready to go straight to interacting with a chatbot.

Now combine the above survey findings with the findings below, in the same report.

Figure 2: Source: CallVu – AI in Customer Service Survey Mar 2024

As CallVu found, people rated live agents much higher than AI assistants on most dimensions, with slight rating advantages for AI assistants on speed, patience, and accuracy.

The interesting part is that customers prefer talking to live agents for venting frustration, indicating a role beyond problem resolution: the exhibition of empathy. It is equally clear, though, that customers prefer interacting with chatbots for simple queries with accurate answers, and the chatbot interaction also seems to strike customers as patient.

Does this mean there is no road ahead for digitalization of customer service interactions using chatbots? A few other surveys show data to the contrary. The sixth edition of Salesforce's 'State of the Connected Customer' report brings out the fact that 61% of customers would still prefer to use self-service to resolve an issue. But 68% warn that after one bad experience they will never use that company's self-service again. With these findings, Salesforce makes the case for an opportunity to further improve the experience through more intelligent, autonomous agents powered by Generative AI.

If we look at what Salesforce promises through its 'Einstein' autonomous AI service agent, we get a peek into what to expect from such agents when other independent software vendors start delivering similar products to the market.

Sophisticated reasoning and natural responses: Fluid, intelligent conversations, coupled with logical inference that connects pieces of information from the company's various data sources.

24/7 swift resolutions driven by trusted data: Grounded in the company's trusted business data.

Built-in guardrails: Including protection of PII (Personally Identifiable Information).

Cross-channel and multimodal: Self-service portals, WhatsApp, Apple Messages for Business, Facebook Messenger, SMS and so on.

Seamless handoffs to human agents: Handoff to a human agent, if needed, with the full context of the conversation; for example, when something must be handled outside defined policy.

Only time will tell whether this will move the needle in the right direction for customers to start relying on digital means more and more to get their service requests resolved. In the near future, we might see a hybrid environment where all three types coexist. Traditional chatbots can handle simple tasks, while AI chatbots manage complex interactions. Autonomous AI chatbots can take on more advanced roles, working alongside humans.

GenAI Adoption – Challenges in Manufacturing Enterprise

While discussions have been ongoing regarding the use of fine-tuned Large Language Models (LLMs) for specific enterprise needs, the high cost associated with cloud-based LLMs, including subscription fees and API usage charges, is becoming increasingly evident. This cost barrier has been a major hurdle for many enterprises seeking to transition GenAI-powered solutions from pilot programs to production environments.

Conversely, Small Language Models (SLMs) appear to be a more suitable option for businesses seeking specialized applications due to their lower cost and resource requirements, since enterprises typically operate with constrained budgets. Piero Molino (Chief Scientific Officer and Cofounder of Predibase, creator of Ludwig, formerly of Uber's AI Lab) predicts that SLMs will be a major driver of enterprise adoption in 2024 due to their attractive financial proposition.

But within the enterprise sector, manufacturing enterprises, especially medium and small manufacturers, will likely be among the slowest adopters of GenAI in their operations. The reasons are worth exploring, because the combination of Industry 4.0's data collection and connectivity with GenAI's analytical and generative capabilities has significant potential to transform manufacturing into a more autonomous, intelligent, and efficient system.

Business Hurdles

Cost

The high cost of Large Language Models (LLMs) is a major hurdle for their adoption in manufacturing. LLMs require massive computing power for training and inference, making enterprises reliant on cloud providers. Cloud provider fees can scale rapidly with model size and usage, and vendor lock-in can be a concern.

Small Language Models (SLMs) offer a potential solution. Their lower computational footprint makes on-premises deployment a possibility. However, implementing SLMs requires expertise in machine learning and model training, which some enterprises may lack. Hiring additional staff or finding a vendor with this expertise is an option, but maintaining an SLM on-premises can be complex and requires significant IT infrastructure.
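
As a rough illustration of why on-premises deployment is within reach, here is a minimal sketch of serving a small open model with the Hugging Face transformers library; the model name is just one example of an SLM, and details such as quantization, batching, and GPU sizing are deliberately omitted:

```python
# Minimal sketch: running a small language model on-premises.
# Assumes the transformers, torch, and accelerate packages are installed
# and that a modest GPU (or a CPU, more slowly) is available.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-2",  # example SLM with a few billion parameters
    device_map="auto",        # place weights on the GPU if one is present
)

prompt = "Summarize this maintenance log entry: spindle vibration rose 12%."
result = generator(prompt, max_new_tokens=100, do_sample=False)
print(result[0]["generated_text"])
```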

For many manufacturing enterprises, the complexity and cost of on-premises SLM maintenance might outweigh the benefits of reduced cloud costs. This could lead them back to cloud-based SLMs, landing them where they started.
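
To see why the trade-off can land either way, here is a hedged back-of-the-envelope comparison; every price and volume below is a made-up placeholder, not a vendor quote:

```python
# Back-of-the-envelope comparison of hosted-API vs. on-prem SLM costs.
# Every number below is an illustrative assumption, not a quote.
monthly_requests = 500_000
tokens_per_request = 1_500                # prompt plus completion
api_usd_per_million_tokens = 2.00         # hypothetical hosted rate

api_monthly = (monthly_requests * tokens_per_request / 1e6
               * api_usd_per_million_tokens)

gpu_server_amortized = 1_800.00           # USD/month, hardware over ~3 years
ops_staff_share = 3_000.00                # USD/month, fractional ML engineer
power_and_hosting = 400.00                # USD/month

onprem_monthly = gpu_server_amortized + ops_staff_share + power_and_hosting

print(f"Hosted API : ${api_monthly:,.0f}/month")
print(f"On-premises: ${onprem_monthly:,.0f}/month")
# At low volume the API wins; the fixed on-prem cost pays off only once
# token volume grows, which is exactly the trade-off described above.
```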

Security Concerns

Security concerns around data privacy are a major hurdle for manufacturing companies considering both external vendors and cloud adoption. Medium and small manufacturers, in particular, have usually viewed the cloud with apprehension.

Change Management

Implementing Generative AI (GenAI) solutions can necessitate significant modifications to existing manufacturing software and may require changes to current processes. While change management might be straightforward for greenfield projects (entirely new systems), most implementations will be brownfield projects (upgrades to existing systems). Manufacturers are understandably hesitant to disrupt well-functioning manufacturing processes unless there’s a compelling reason. Therefore, a robust business case and a well-defined plan for minimizing disruption during change management are crucial.

Technical Hurdles

Data Challenges

GenAI models require large amounts of clean, labelled data to train effectively. Manufacturing processes can be complex and generate data that is siloed, inconsistent, or proprietary. So, unless existing observability solutions have captured sensor telemetry over a period of time, the manufacturer cannot directly introduce a GenAI solution. Additionally, companies may be hesitant to share this data with external vendors.

Integration Complexity

Integrating GenAI solutions with existing manufacturing systems can be complex and require expertise in both AI and manufacturing technologies. Vendors may need to have experience working with similar manufacturing systems to ensure a smooth integration. Existing vendors may have to be roped in for the integration, which would incur additional cost. Integration governance could become complex.

Lack of Standardization

The field of GenAI is still evolving, and there is a lack of standardization in tools and techniques. This can make it difficult for companies to evaluate and select the right vendor for their needs.

Accuracy

SLMs are likely less susceptible to hallucination and bias than LLMs. They are trained on a smaller amount of data, typically focused on a specific domain or task, and have a simpler architecture. Hence, they are less prone to inventing information or connections that are not there.

Data quality still matters, though. Even with a smaller dataset, bias can be present if the training data itself is biased. In manufacturing systems, this includes plant-shift bias, machine-age bias, role-importance bias, vendor bias, and so on. Bias can also build up through the feedback loop from new production output.

Less Established Tools and Expertise

There are fewer established tools and frameworks specifically designed for SLMs compared to LLMs. Finding experts with experience in implementing SLM-based GenAI solutions might be more challenging.

Conclusion

What you will notice is that although using an SLM instead of an LLM reduces cost, the challenges and hurdles remain almost the same. The hesitation manufacturers felt about LLM-based solutions remains for SLM-based solutions, in many cases preventing them from moving from pilot to production. That hesitation needs to be tackled on a priority basis to unlock the potential of SLMs for the future of smart manufacturing.

Gen AI adoption: Is your budget ready?

As the adoption of Generative AI in the enterprise accelerates, one question that will be on Management’s mind: “What does AI cost?” The answer, like most things in business, is nuanced: it depends on the specific needs of the enterprise.

For a rough estimate, you can look at comparable businesses. For example, small enterprises with limited budgets might begin with AI-powered chatbots to automate customer support, freeing up existing staff for more complex tasks. But stopping at rough estimates is not a good approach.

Underestimating the importance of proper budgeting for adopting and operationalizing AI in the enterprise can be disastrous. A cautionary tale comes from cloud adoption, where unforeseen costs have triggered an exodus of businesses from cloud back to on-premises infrastructure.

Many sources, like those in Reference 1, meticulously list and explain the various cost factors involved. In a diagram here, these costs have been mapped onto different stages of Generative AI adoption. I haven’t elaborated on all stages in the diagram because some warrant their own detailed illustration.

Let us look at why a particular cost matters in the indicated stage. The assumption here is that the enterprise aims to fully implement and manage GenAI itself. The understanding would need a little tweaking, but would hold good even when the enterprise decides to partially or completely outsource this activity.

Consultant Cost:

  • This is the cost of a consultant or consulting firm who will provide guidance and support throughout the AI adoption process.
  • Where this cost is not indicated in the diagram, it is minimal compared to other costs at that stage.

Talent Cost:

  • Primarily encompasses the costs associated with reskilling current staff and hiring new talent.
  • Exercise caution, as advised by Hugo Huang, regarding the specific skills and headcount required for both AI solution implementation and ongoing maintenance.
  • Meticulous planning and budgeting are essential to prevent cost overruns.
  • While not explicitly indicated in the diagram in some stages, staff costs are assumed to be integrated within other categories such as Software Development Cost, Data Preparation Cost, and Rollout costs.

Cloud Cost:

  • Initial cloud costs will arise during the training phase, gradually scaling up to the target level during Rollout.
  • Carefully anticipate and plan for these costs, which are distributed across multiple stages.
  • If opting for an on-premises setup instead of cloud-based infrastructure, accurately factor in the equivalent costs.
  • Transitioning to on-premises infrastructure may necessitate a comprehensive review of existing infrastructure, potentially requiring additional efforts and budget allocation.

Inference Cost:

  • Initial inference costs will begin during training, escalating significantly during the three stages of Rollout. During steady-state operations, this will be a major contributor to ongoing cost.
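
For a sense of the arithmetic, here is a hedged sketch of a token-based estimate, in the spirit of the calculations Maryam Ashoori's article walks through; the rates and volumes are placeholders, not actual vendor prices:

```python
# Rough token-based inference cost estimate; all rates are assumptions.
def monthly_inference_cost(requests_per_day: int,
                           avg_prompt_tokens: int,
                           avg_completion_tokens: int,
                           usd_per_million_input: float,
                           usd_per_million_output: float) -> float:
    """Estimate monthly spend for a hosted model billed per token."""
    days = 30
    input_tokens = requests_per_day * days * avg_prompt_tokens
    output_tokens = requests_per_day * days * avg_completion_tokens
    return (input_tokens / 1e6) * usd_per_million_input \
         + (output_tokens / 1e6) * usd_per_million_output

# Example: a support assistant at modest volume, hypothetical rates.
print(f"${monthly_inference_cost(10_000, 800, 300, 0.50, 1.50):,.2f}/month")
```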

Data Preparation Cost:

  • Encompasses costs associated with data scientists, data analysts, and computing infrastructure (either cloud-based or on-premises).
  • Covers tasks such as cleansing, organizing, processing, and labelling data before it’s ready for training.
  • Expect a considerable time and money investment for this stage.
  • Additional costs may arise for implementing scalable and efficient data storage and data management systems.

Software Development Cost:

  • Involves costs related to building and testing applications that facilitate user interaction with the deployed GenAI solution.
  • Includes expenses for IT talent, licenses, and necessary infrastructure.

Fine-Tuning Cost:

  • Accounts for costs of personnel and infrastructure.
  • If synthetic data generation is planned using the trained model, include the related costs as well, such as personnel cost, inference cost, cloud cost, etc.
  • Budget for this cost only if Fine-Tuning is part of the strategy.

Prompt Engineering Cost:

  • Allocate a budget for this cost if Prompt Engineering is chosen instead of, or in conjunction with, Fine-Tuning.
  • Primarily consists of costs associated with trained personnel.

Integration Cost:

  • This is for the cost of integrating newly built solutions with existing systems to ensure seamless user experience.
  • Involves the time and expertise of staff who manage these existing systems, even if not directly involved in GenAI implementation.
  • May necessitate changes to existing systems, requiring additional budget allocation.

Operations Cost:

  • Covers costs associated with the deployment and ongoing maintenance of the entire solution.
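
Pulling the categories above together, here is a toy roll-up by stage. The stage names follow the spirit of the diagram but are my assumptions, as are all the dollar figures:

```python
# Toy roll-up of the cost categories above by adoption stage.
# Stage names and dollar figures are illustrative assumptions only.
from collections import defaultdict

budget = {  # (stage, category) -> estimated USD
    ("Assessment",  "Consultant"):            40_000,
    ("Data Prep",   "Data Preparation"):     120_000,
    ("Training",    "Cloud"):                 90_000,
    ("Training",    "Talent"):                60_000,
    ("Fine-Tuning", "Fine-Tuning"):           45_000,
    ("Build",       "Software Development"): 150_000,
    ("Build",       "Integration"):           70_000,
    ("Rollout",     "Cloud"):                 50_000,
    ("Rollout",     "Inference"):             30_000,
    ("Operations",  "Operations"):            25_000,  # then recurring monthly
}

by_stage = defaultdict(int)
for (stage, _category), usd in budget.items():
    by_stage[stage] += usd

for stage, usd in by_stage.items():
    print(f"{stage:12s} ${usd:>9,}")
print(f"{'TOTAL':12s} ${sum(by_stage.values()):>9,}")
```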

HBR’s Hugo Huang suggests management strategies for CEO/CIO for controlling costs. The CEO/CIO will constitute teams to carry out the GEN Ai adoption. While reviewing and signing-off the budgets / costs, the understanding of where the different costs occur will help.

Maryam Ashoori, in her article, gets into the nuts and bolts of how to calculate the different costs. A combination of similar approaches will help the teams constituted by the CEO/CIO make sure that costs are well budgeted and under control.

To maintain competitive advantage, AI adoption in the enterprise is unavoidable. Cost estimation and control is one of the key pillars upon which successful adoption of AI rests.

References:

  1. What CEOs Need to Know About the Costs of Adopting GenAI by Hugo Huang
    https://hbr.org/2023/11/what-ceos-need-to-know-about-the-costs-of-adopting-genai?ab=HP-latest-image-2
  2. Decoding the True Cost of Generative AI for Your Enterprise by Maryam Ashoori
    https://www.linkedin.com/pulse/decoding-true-cost-generative-ai-your-enterprise-maryam-ashoori-phd/

The ‘Ops’ in the GenAI World

The world of AI and its operational cousins can feel like an alphabet soup: AIOps, MLOps, DataOps, and now GenAIOps. The key lies in understanding their distinct roles and how they can collaborate to deliver the full potential of your GenAI adoption and data investments.

Definitions

AIOps, which stands for Artificial Intelligence for IT Operations, is a rapidly evolving field that aims to leverage AI and machine learning to automate and optimize various tasks within IT operations.

MLOps is a set of practices and tools that bring DevOps principles to the world of machine learning. It aims to automate and streamline the development, deployment, and maintenance of machine learning models in production.

DataOps is a set of practices, processes, and technologies that aim to improve the management and delivery of data products and applications. It borrows heavily from the DevOps methodology and applies it to the world of data.

GenAIOps is the emerging field that applies the principles of AIOps, DataOps, and MLOps to the specific challenges of managing and optimizing Generative AI systems.

Key Activities and Benefits

The summary below captures the key objectives, main activities, and benefits of these 'Ops' areas.

AIOps (objective: optimize AI infrastructure and operations)

  • Main activities: automate manual tasks (incident detection, root cause analysis, remediation); improve monitoring and analytics (AI-powered analysis of IT data); proactively predict and prevent issues (issue prediction from historical data); enhance collaboration and decision-making (unified platform for IT teams).
  • Benefits: reduced downtime and costs; improved AI performance; faster problem resolution; more informed decision-making.

MLOps (objective: ensure an efficient and reliable ML lifecycle)

  • Main activities: automate the ML pipeline (data pre-processing, training, deployment, monitoring); foster collaboration and communication (break down silos between teams); implement governance and security (compliance, ethical guidelines).
  • Benefits: faster time to market for ML models; increased model accuracy and reliability; improved model governance and compliance; reduced risk of model failures.

DataOps (objective: improve data quality, availability, and accessibility)

  • Main activities: automate data pipelines (ingestion, transformation, delivery); implement data governance and quality control (standardization, validation); monitor data quality and lineage.
  • Benefits: improved data quality and trust; better decision-making; increased data accessibility and efficiency; reduced data-related errors.

GenAIOps (objective: streamline and automate generative AI development and operations)

  • Main activities: automate generative AI pipelines (data preparation, training, output generation); monitor and manage generative AI models (bias detection, remediation); implement governance and safety controls (bias mitigation, explainability tools); optimize resource allocation and cost management; facilitate collaboration and communication.
  • Benefits: faster development and deployment of generative AI applications; improved innovation and creativity; efficient management of generative AI models; reduced risk of bias and ethical issues in generative AI outputs.

Comparative view

Because implementing GenAIOps would usually require deploying MLOps, DataOps, and AIOps as well, it is worthwhile to analyze their distinctions and overlaps.

AIOps and MLOps

One uses AI, while the other applies DevOps principles.

AIOps:

  • Focus: Applying AI to improve IT operations as a whole.
  • Goals: Automate tasks, improve monitoring and analytics, predict and prevent issues, enhance collaboration and decision-making.
  • Examples: Using AI to detect network anomalies, automate incident resolution, or predict server failures.

MLOps:

  • Focus: Operationalizing and managing machine learning models effectively.
  • Goals: Automate the ML pipeline, deploy and monitor models in production, optimize performance, and ensure reliable and scalable operation.
  • Examples: Automating data pre-processing for model training, continuously monitoring model accuracy and bias, or automatically rolling back models when performance degrades.

Key Differences:

  • Scope: AIOps is broader, focusing on all aspects of IT operations, while MLOps is specifically about managing ML models.
  • Approach: AIOps uses AI as a tool for existing IT processes, while MLOps aims to fundamentally change how ML models are developed, deployed, and managed.
  • Impact: AIOps can improve the efficiency and reliability of IT operations, while MLOps can accelerate the adoption and impact of ML models in real-world applications.

Overlap and Synergy:

  • There is some overlap between AIOps and MLOps, especially in areas like monitoring and automation.
  • They can work together synergistically: AIOps can provide data and insights to improve MLOps, and MLOps can develop AI-powered tools that benefit AIOps.

So, while their core goals differ, AIOps and MLOps are complementary approaches that can together drive AI adoption and optimize both IT operations and ML models.

MLOps and GenAIOps

In the sense of focusing on operationalizing models, MLOps and GenAIOps share a similar core objective. Both aim to streamline the processes involved in deploying, monitoring, and maintaining models in production effectively. However, there are some key differences that distinguish them:

Type of models:

  • MLOps: Primarily focuses on managing traditional machine learning models used for tasks like classification, regression, or forecasting.
  • GenAIOps: Specifically deals with operationalizing Generative AI models capable of generating creative outputs like text, images, code, or music.

Challenges and complexities:

  • MLOps: Faces challenges like data quality and bias, model performance monitoring, and resource optimization.
  • GenAIOps: Grapples with additional complexities due to the unique nature of Generative AI, including:
    • Data diversity and bias: Ensuring diversity and mitigating bias in training data, as Generative AI models are particularly sensitive to these issues.
    • Explainability and interpretability: Providing tools and techniques to understand how Generative AI models make decisions and interpret their outputs, both for developers and users.
    • Ethical and regulatory considerations: Addressing ethical concerns and complying with relevant regulations surrounding Generative AI applications.

Tools and techniques:

  • MLOps: Tools for automating data pipelines, deploying models, monitoring performance, and managing resources might be sufficient.
  • GenAIOps: May require specialized tools and techniques tailored to address the unique challenges of Generative AI, such as:
    • Bias detection and mitigation tools: To identify and address potential biases in training data and model outputs.
    • Explainability frameworks: To facilitate understanding of how Generative AI models make decisions.
    • Content filtering and moderation tools: To ensure safe and responsible generation of outputs (a minimal sketch of such a guardrail follows this list).
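
To make the last point concrete, here is a minimal sketch of a content-filtering guardrail wrapped around a generation call. The banned-term patterns and the stubbed model client are placeholders; a production pipeline would use trained safety classifiers rather than keyword matching:

```python
# Minimal GenAIOps-style output guardrail; patterns and model are placeholders.
import re

BANNED_PATTERNS = [r"\bhome address\b", r"\bcredit card\b"]  # toy PII list

def call_model(prompt: str) -> str:
    # Stand-in for a real model client (hosted API or on-prem SLM).
    return "Sure! The customer's home address is 12 Example Street."

def moderated_generate(prompt: str) -> str:
    """Generate text, then withhold outputs that trip the content filter."""
    output = call_model(prompt)
    for pattern in BANNED_PATTERNS:
        if re.search(pattern, output, flags=re.IGNORECASE):
            # A real pipeline would also log the event and route it for review.
            return "[response withheld by content filter]"
    return output

print(moderated_generate("Where does this customer live?"))
```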

While both MLOps and GenAIOps share the general goal of operationalizing models, the specific challenges and complexities faced by Generative AI necessitate the development of specialized tools and practices within GenAIOps.

Collaboration:

  • AIOps and GenAIOps: These fields can coexist and complement each other within an organization. AIOps focuses on broader IT operations, while GenAIOps specifically addresses the unique challenges of managing Generative AI models. They can share data and insights to improve overall AI-driven decision-making and optimization.
  • MLOps and GenAIOps: While both focus on model operationalization, GenAIOps can be considered a specialized subset of MLOps that addresses the unique needs of Generative AI models. In organizations heavily invested in Generative AI, GenAIOps practices might naturally subsume the broader MLOps practices, ensuring tailored governance and operational efficiency for these advanced models.

Integration considerations:

  • Scope and Focus: Clearly define the scope of each field within your organization to ensure alignment and avoid overlap.
  • Tooling and Infrastructure: Evaluate whether existing MLOps tools can adequately support GenAIOps requirements or if specialized tools are needed.
  • Skill Sets: Foster cross-team collaboration and knowledge sharing to bridge gaps between AIOps, MLOps, and GenAIOps teams. This is one of the most important considerations for keeping operations costs down.

Summary and Future Outlook

  • AIOps and GenAIOps can coexist and collaborate for broader IT optimization and responsible Generative AI management.
  • GenAIOps can subsume MLOps practices in organizations with a strong focus on Generative AI, ensuring tailored governance and efficiency.
  • This convergence could lead to more comprehensive platforms and tools that address the entire AI lifecycle, from development to deployment, monitoring, and maintenance.

References

  1. What is AIOps? : https://www.ibm.com/topics/aiops
  2. What is MLOps and Why It Matters: https://www.databricks.com/glossary/mlops
  3. GenAIOps: Evolving the MLOps Framework: https://towardsdatascience.com/genaiops-evolving-the-mlops-framework-b0012f936379
  4. AI Project Management: The Roadmap to Success with AI, DataOps, and GenAIOps: https://www.techopedia.com/ai-project-management-the-roadmap-to-success-with-mlops-dataops-and-genaiops

Generative AI in Product Genealogy Solution in Manufacturing

The demand for guaranteed product quality through comprehensive traceability is rapidly spreading beyond the pharmaceutical industry and into other manufacturing sectors. This rising demand stems from both increased customer awareness and stricter regulations. To address this need, manufacturers are turning to Product Traceability, also known as Product Genealogy, solutions.

Efforts over the past 4-5 years, even by Micro, Small and Medium Enterprises (MSMEs), to embrace digitalization and align with Industry 4.0 principles have paved the way for the deployment of hybrid Product Genealogy solutions. These solutions combine digital technology with human interventions. However, the emergence of readily available and deployable Generative AI models presents a promising opportunity to further eliminate human intervention, ultimately boosting manufacturing profitability.

To illustrate this potential, let’s consider the Long Steel Products Industry. This industry encompasses a diverse range of products, from reinforcement bars (rebars) used in civil construction with less stringent requirements, to specialized steel rods employed in demanding applications like automobiles and aviation.

The diagram below gives a high-level view of the manufacturing process stages.

Beyond core process automation done under Industry 3.0, steel manufacturers have embraced digitalization through Visualization Solutions. These solutions leverage existing sensors, supplemented by new ones and IIoT (Industrial IoT) technology, to transform data collection. They gather data from the production floor, send it to cloud hosted Visualization platforms, and process it into meaningful textual and graphical insights presented through dashboards. This empowers data-driven decision-making by providing valuable management insights, significantly improving efficiency, accuracy, and decision-making speed, ultimately benefiting the bottom line.

However, human involvement remains high in decision-making, defining actions, and implementing them on the production floor. This is where Generative AI, a disruptive technology, enters the scene.

Imagine a production process equipped with a pre-existing Visualization solution, constantly collecting data from diverse sensors throughout the production cycle. Let’s explore how Generative AI adds value in such a plant, specifically focusing on long steel products where each batch run (“campaign”) typically produces rods/bars with distinct chemical compositions (e.g., 8mm with one composition, 14mm with another).

Insights and Anomalies

  • Real-time data from diverse production sensors (scrap sorting, melting, rolling, cooling) feeds into a time-series database. This multi-modal telemetry data, such as temperature, pressure, chemical composition, vibration, and visual information, fuels a Visualization platform generating predefined dashboards and alerts. With training and continuous learning, Generative AI models analyse this data in real time, identifying patterns and deviations not anticipated by the predefined rules (a minimal sketch of such anomaly flagging follows this list). These AI-inferred insights, alongside predefined alerts, highlight potential issues like unexpected temperature spikes, unusual pressure fluctuations, or off-spec chemical composition.
  • If trained on historical and ongoing ‘action taken’ data, the AI model can generate partial or complete configurations (“recipes”) for uploading to PLCs (Programmable Logic Controllers). These recipes, tailored for specific campaigns based on desired results, adjust equipment settings like temperature, cooling water flow, and conveyor speed. The PLCs then transmit these configs to equipment controllers, optimizing production for each unique campaign.
  • Individual bars can be identified within a campaign using QR code stickers, engraved codes, or even software-generated IDs based on sensor data. This ID allows the AI to link process and chemical data (known as ‘Heat Chemistry’) to each specific bar. This information helps identify non-conforming products early, preventing them from reaching final stages. For example, non-conforming bars can be automatically separated at the cooling bed before reaching bundling stations.
  • Customers can access detailed information about the specific processes and materials used to create their steel products, including actual chemistry and physical quality data points. This transparency builds trust in the product’s quality and origin, differentiating your brand in the market.
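
As a minimal illustration of the real-time anomaly flagging described in the first bullet, here is a sketch using a rolling z-score over a single temperature stream. This is a deliberately simple statistical stand-in for the trained models described above; the window size, threshold, and sample data are assumptions:

```python
# Toy real-time anomaly flag on a temperature stream; thresholds, sample
# rate, and field meanings are illustrative assumptions only.
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # samples kept as the rolling baseline
Z_THRESHOLD = 3.0  # flag readings more than 3 standard deviations out

window = deque(maxlen=WINDOW)

def check_sample(temperature_c: float) -> bool:
    """Return True if this reading deviates sharply from recent history."""
    is_anomaly = False
    if len(window) == WINDOW:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(temperature_c - mu) / sigma > Z_THRESHOLD:
            is_anomaly = True  # e.g. an unexpected temperature spike
    window.append(temperature_c)
    return is_anomaly

baseline = [850.0 + (i % 5) * 0.5 for i in range(60)]  # normal readings
for t, reading in enumerate(baseline + [990.0]):
    if check_sample(reading):
        print(f"Anomaly at sample {t}: {reading} C")   # fires on the spike
```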

Enriched Data Records

  • The AI model’s capabilities extend beyond mere interpretation of raw sensor data—it actively enriches it with additional information. This enrichment process encompasses:
    • Derived features: AI extracts meaningful variables from sensor data, such as calculating cooling rates from temperature readings or estimating carbon content from spectral analysis (see the sketch after this list).
    • Contextualization: AI seamlessly links data points to specific production stages, equipment used, and even raw material batch information, providing a holistic view of the manufacturing process.
    • Anomaly flagging: AI vigilantly marks data points that deviate from expected values, making critical events easily identifiable and facilitating prompt corrective actions. This also helps in continuous learning by the AI model.
  • This enriched data forms a comprehensive digital history for each bar, providing invaluable insights that fuel process optimization and quality control initiatives.
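
As a small illustration of the "derived features" idea, here is a sketch that computes a cooling rate from timestamped temperature readings; the input format and units are assumptions:

```python
# Derive a cooling rate (degrees C per second) from timestamped readings.
# Input format is an assumption: (unix_seconds, temperature_c) pairs.
def cooling_rate(readings: list[tuple[float, float]]) -> float:
    """Average cooling rate across the window; positive means cooling."""
    (t0, temp0), (t1, temp1) = readings[0], readings[-1]
    if t1 == t0:
        raise ValueError("need readings spanning a time interval")
    return (temp0 - temp1) / (t1 - t0)

# Example: a bar cools from 890 C to 610 C over 70 seconds -> 4.0 C/s.
print(cooling_rate([(0.0, 890.0), (35.0, 740.0), (70.0, 610.0)]))
```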

While the aforementioned functionalities showcase Generative AI’s immediate impact on traceability, its potential extends far beyond. Trained and self-learning models pave the way for advancements like predictive maintenance, product simulation, waste forecasting, and even autonomous recipe management. However, these exciting future applications lie beyond the scope of this blog.

Despite its nascent stage in long steel product genealogy, Generative AI is already attracting significant attention from various companies and research initiatives. This growing interest underscores its immense potential to revolutionize the industry.

Challenges and Considerations

  • Data Quality and Availability: The success of AI-powered traceability hinges on accurate and complete data throughout the production process. Integrating AI with existing infrastructure and ensuring data consistency across systems pose significant challenges.
  • Privacy and Security Concerns: Sensitive data about materials, processes, and customers must be protected. Secure data storage, robust access control mechanisms, and compliance with relevant regulations are paramount.
  • Scalability and Cost-Effectiveness: Implementing AI-based solutions requires investment in hardware, software, and expert skills. Careful ROI analysis and planning are crucial to avoid budget overruns. Scaling these solutions to large facilities and complex supply chains requires thoughtful cost analysis and strategic planning.

By addressing these challenges and unlocking the power of Generative AI, manufacturers can establish robust and transparent product traceability systems. This, in turn, will lead to enhanced product quality, increased customer trust, and more sustainable practices.

GenAI & LLM: Impact on Human Jobs

I met the IT head of a leading manufacturing company at a social gathering. When he told me, with conviction, that current AI progress is destructive to jobs done by humans and that doomsday is coming, I realized that many people carry a similar opinion, one I felt needed to be corrected.

A good starting point for understanding the impact of AI on jobs done by humans today is the World Economic Forum's white paper published in September 2023 (Reference 1). It gives us a fascinating glimpse into the future of work in the era of Generative AI (GenAI) and Large Language Models (LLMs). The report sheds light on the intricate dance between Generative AI and the future of employment, revealing nuanced trends that are set to reshape the job market. A few key messages from the paper follow.

At the heart of the discussion is the distinction between jobs that are ripe for augmentation and those that face the prospect of automation. According to the report, jobs that involve routine, repetitive tasks are at a higher risk of automation. Tasks that can be easily defined and predicted might find themselves in the capable hands of AI. Think data entry, basic analysis, and other rule-based responsibilities. LLMs, with their ability to understand and generate human-like text, excel in scenarios where the tasks are well-defined and can be streamlined.

However, it’s not a doomsday scenario for human workers. In fact, the report emphasizes the idea of job augmentation rather than outright replacement. This means that while certain aspects of a job may be automated, there’s a simultaneous enhancement of human capabilities through collaboration with LLMs. It’s a symbiotic relationship where humans leverage the strengths of AI to become more efficient and dynamic in their roles. For instance, content creation, customer service, and decision-making processes could see a significant boost with the integration of LLMs.

Interestingly, the jobs that seem to thrive in this evolving landscape are the ones requiring a distinctly human touch. Roles demanding creativity, critical thinking, emotional intelligence, and nuanced communication are poised to flourish. LLMs, despite their impressive abilities, still grapple with the complexity of human emotions and the subtleties of creative expression. This places humans in a unique position to contribute in ways that machines currently cannot. Even here, though, the ability of LLMs to understand context, generate human-like text, and assist in complex problem-solving positions them as valuable tools for humans.

Imagine a future where content creation becomes a collaborative effort between human creativity and AI efficiency, or where customer service benefits from the empathetic understanding of LLMs. Decision-making processes, too, could see a paradigm shift as humans harness the analytical prowess of AI to make more informed and strategic choices.

There is also the creation of new types of jobs, 'emerging jobs' as they are called. Ethics and Governance Specialist is one such emerging job.

The paper also brings together a view of job exposure by functional area and by industry group, ranked by exposure (augmentation and automation potential) across a large number of jobs, to give the reader a feel for what is stated above.

In essence, the report paints a picture of a future where humans and AI are not adversaries but partners in progress. The workplace becomes a dynamic arena where humans bring creativity, intuition, and emotional intelligence to the table, while LLMs contribute efficiency, data processing power, and a unique form of problem-solving. The key takeaway is one of collaboration, where the fusion of human and machine capabilities leads to a more productive, innovative, and engaging work environment. So, as we navigate this evolving landscape, it’s not about job replacement; it’s about embracing the opportunities that arise when humans and LLMs work hand in virtual hand.


References:

  1. Jobs of Tomorrow: Large Language Models and Jobs, September 2023. A World Economic Forum (WEF) white paper jointly authored by WEF and Accenture. https://www3.weforum.org/docs/WEF_Jobs_of_Tomorrow_Generative_AI_2023.pdf


Domain and LLM

I am in total agreement with the Morgan Zimmerman (Dassault Systèmes) quote in the TOI today. Every industry has its own terminology, concepts, names, and words, i.e., an industry language. He says even a simple-looking word like "Certification" has different meanings in aerospace vs. life sciences. He recommends the use of industry-specific language, and your own company-specific language, to get significant benefit out of LLMs. This will also reduce hallucinations and misunderstandings.

This is in line with @AiThoughts.Org thoughts on layering domain- and company-specific information on top of the general data used by all LLMs. Like they say in real estate, the three most important things in any buying decision are "Location, Location and Location". We need three things to make LLMs work for the enterprise: "Domain, Domain and Domain". Many of us may recall the very successful Bill Clinton presidential campaign slogan, "The economy, Stupid". We can make "The domain, Stupid" the slogan for making LLMs useful to enterprises.

But the million-dollar question is: how much will it cost to keep the model updated with your domain and company data? EY published a cost of US$1.4 billion, which very few can afford. We need much less expensive solutions for large-scale implementation of LLMs.

Solicit your thoughts. #LLM #aiml #Aiethics #Aiforindustry

L Ravichandran

AI Legislation: Need Urgency

Let me first wish you all happy Navratri festivities. I still fondly remember the Durga Pooja days during my Indian Statistical Institute years. However, we also need to remember that we are in the midst of two wars, one in Ukraine and the other in the Middle East. We hope solutions are found and further loss of life and destruction is stopped.

I came across two articles in The Hindu newspaper regarding our topic, AI. I have attached a scan of an editorial by M.K. Narayanan, a well-known national security and cyber expert.

Few highlights are worth mentioning for all of us to ponder.

  • There is general agreement that the latest advances in AI pose a major threat and need to be regulated like nuclear power technologies.
  • Countries are not merely "locking the gates after the horse has bolted"; they are still "discussing locking the gates and deciding on the make and model of the lock while the horse has bolted". Huge delays in enacting and implementing AI legislation are flagged as a big issue.
  • Rogue nations that wilfully decide not to enforce any regulations will gain a huge advantage over law-abiding nations.
  • More than 50% of large enterprises are sitting on "intangible" assets which are at huge risk of evaporating through AI-powered cyber warfare by non-state actors.
  • Cognitive warfare using AI technologies will destabilize governments and news media and alter human cognition.
  • This is a new kind of warfare in which states and technology companies must closely collaborate.
  • Also interesting is the mention that overdependence on AI and algorithms may have caused the major intelligence failure in the latest Middle East conflict.

All of these point to the same conclusion. All countries and multilateral organizations such as the UN, EU, African Union, and G20, and military alliances like NATO, must move at lightning speed to understand and agree on measures to effectively control and use this great technology.

The old classic advertisement slogan “JUST DO IT” must be the motto of all the organizations.

Similar efforts are needed by all large enterprises, large financial institutions, and regulatory agencies to get ready for at-scale implementation of these technologies.

Last but not least, large technology companies need to look at this not just as another innovation to help automation, but as a human-affecting, majorly disruptive technology, and spend sufficient resources on understanding it and putting in sufficient brakes to avoid runaway situations.

Cybersecurity auditors, ethics auditors, and risk-management auditors will have huge opportunities, and they have to start upskilling fast.

More later,

L Ravichandran.

AI and Law

The public domain is full of initiatives by many law universities, large law firms, and various government departments on the topic of "AI and Law". I was happy to see a news article a few days ago about the Indian consumer grievances cell considering the use of AI to clear a large number of pending cases. It has had some success in streamlining processes and making them digital, but it feels the sheer volume of pending cases needs an AI-type intervention. I have already talked about the huge volume of civil cases pending in lower courts in India, with some cases taking even 20 years to reach final judgment. As the saying goes, "justice delayed is justice denied"; it is imperative that we find solutions to this huge backlog problem.

All discussions are centred around two broad areas:

1. Legal research and development of a client's case by law firms: basically, the core work of both junior and senior law associates and partners.

2. AI models assisting judges, or even rendering judgments on their own, to reduce the backlog and speed up justice.

Lots of interesting discussions are happening on (1). Legal research, looking into archives, similar judgments, precedents, etc., seems to be a no-brainer. Huge advances in automation have already been made, and these will multiply with purpose-built legal language models. What will happen to junior law associates is an interesting question. Can they use better research to develop actual arguments and superior case briefs for their clients, taking the load off senior associates who in turn can focus more on client interactions? I found the discussions on models analysing judges' earlier judgments and customizing argument briefs per judge fascinating.

Item (2) needs a lot of discussion. The jurisprudence of all democratic countries is based on three fundamental principles:

  1. Every citizen will have their “day in the court” to present their case to an impartial judge.
  2. Every citizen will have a right to a competent counsel with a provision of public defenders given free to the citizens.
  3. Every witness can be cross examined by the other party without any restrictions.

On the one hand, we have these great jurisprudence principles.  On the other hand, we have huge backlogs and delays. 

How much of these basic principles are citizens willing to give up to get speedy justice?

Can we give up the principle of "my day in court" and let only the written briefs submitted to the court be used for the final judgment? This would mean witness statements in briefs are not cross-examined or questioned.

Can we give up the presence of a human judge who reads the briefs on both sides and makes a judgment, and instead let an AI model read both briefs and pronounce the judgment?

Even if citizens are willing to give up these principles, does the existing law of the land allow this? It may require changes to the law and, in some countries, even changes to the constitution to allow for this new AI jurisprudence.

Do we treat civil cases and criminal cases separately and find different solutions? Criminal cases involve human liberty issues such as imprisonment and will need a whole different set of benchmarks.

What about changes to the appeal process if you do not like a lower court judgment? I presume we will need human judges to review the judgments given by AI models; it is very difficult for us to accept a higher-court AI model reviewing and correcting a lower-court AI model's original judgment.

The biggest hurdle is going to be us, the citizens. In any legal case involving two parties, one party always, and in many cases both parties, will be unhappy with the judgment. No losing party in a civil case is going to be happy that they lost per some sub-clause in some law text, and in many cases even winning parties may not be happy with the award amount. In this kind of scenario, how do you expect citizens to accept an instantaneous verdict delivered after both parties submit their briefs? This will be a great human change management issue.

Even if we come up with solutions to these complex legal and people problems, one technical challenge still remains a big hurdle. With the release of many large language models and APIs, many projects are under way to train these LLMs on specific domains. A few days ago, we saw a press release by EY about their domain-specific model, developed with an investment of US$1.4 billion. Bloomberg announced BloombergGPT, its own 50-billion-parameter language model purpose-built for finance. Who will bell the cat for the law domain? Who will invest large sums and create a legal AI model for each country? Until such a model is available for general use, many of the things we discussed will not be possible.

To conclude, there are huge opportunities to get business value out of the new AI technology in the law and justice domain. However, technical, legal, and people issues must be understood, addressed, and resolved before any large-scale implementation.

More Later. Like to hear your thoughts.

L Ravichandran

EU AI Regulations Update

I wrote some time back about the circulation of the EU AI Act draft. After more than two years, there is some more movement toward making this an EU law. In June 2023, the EU Parliament adopted the draft and a set of negotiating principles, and the next step of discussions with member countries has started. EU officials are confident that this process will be completed by the end of 2023 and that this will become EU law soon. Like the old Hindi proverb "Bhagwan ghar mein der hai, andher nahin": "In God's scheme of things, there may be delays but never darkness." The EU has taken the first step, and if this becomes law by early 2024, it will be a big achievement. I am sure the USA and other large countries will follow soon.

The draft has more or less maintained its basic principles and structure. 

The basic objective of the new law is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. In addition, there is a larger emphasis on AI systems being overseen by people, rather than by automation alone. The principle of proportionate regulation, the risk categorization of AI systems, and a level of regulation appropriate to the risk are the central themes of the proposed law. There were no generative AI or ChatGPT-like products when the original draft was developed in 2021, so additional regulations have been added to address large language models and generative AI models. The draft also plans to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

Just to recall from my earlier blog, the risks are categorized into limited risk, high risk, and unacceptable risk.

The draft law clearly defines systems categorized as "unacceptable risk" and proposes to ban them from commercial launch within EU member countries. Some examples are given below.

  • Any AI system that can change or manipulate the cognitive behaviour of humans, especially vulnerable groups such as children and the elderly.
  • Any AI system that classifies people based on personal traits such as behaviour, socio-economic status, race, or other personal characteristics.
  • Any AI system that performs real-time, remote biometric identification, such as facial recognition, usually without the consent of the person targeted. The law also clarifies that analysis of past data for law enforcement purposes is acceptable with court orders.

The draft law is concerned about any negative impact on the fundamental rights of EU citizens and any impact on personal safety. Systems posing such risks will be categorized as high risk.

1) Many products such as toys, automobiles, aviation products, and medical devices are already covered by existing EU product safety legislation. Any AI systems used inside products regulated under this legislation will also be subject to additional regulation under the high-risk category.

2) Other AI systems falling into eight specific areas will be classified as high risk, requiring registration in an EU database and compliance with the new regulations.

The eight areas are:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Assistance in legal interpretation and application of the law.


Once these systems are registered in the EU database, they will be assessed by appropriate agencies for functionality, safety features, transparency, grievance mechanisms for appeal, etc., and will be given approval before they are deployed in the EU market. All updates and new versions of these AI systems will be subject to similar scrutiny.


Other AI systems not in the above two lists will be termed "limited risk" systems and subject to self-regulation. At a minimum, the law expects these systems to inform users that they are interacting with an AI system and to provide options to switch to a human-operated system or discontinue using the system.

As I have mentioned before, the proposed law covers generative AI systems as well. The law requires these systems to disclose to users that an output document or decision was generated or derived by a generative AI system. In addition, the system should publish the list of copyrighted training content used by the model. I am not sure how practical this is, given that ChatGPT-like systems read nearly every piece of digital content on the web and are now moving into audio and video content. Even if a system produces this list, which is expected to be very large, I am not sure current copyright laws are sufficient to address the use of this copyrighted material in a different form inside deep-learning neural networks.

The proposed law also wants to ensure that generative AI models are self-regulated enough not to generate illegal content or provide illegal advice to users.


The Indian government is also looking at enacting AI regulations soon; in a June 9th, 2023 interview, the Indian IT minister talked about this. He emphasized the objective of "no harm" to citizen digital users: the government's approach to any regulation of AI will be through the prism of "user harm, or derived user harm, through the use of any AI technology". I am sure a draft will be out soon, and India will also have similar laws.

Let us discuss about what are the implications or consequences of this regulation among the various stakeholders.

  • AI system developer companies (tech companies and enterprises)


They need to educate all their AI development teams on these laws and ensure their systems are tested for compliance prior to commercial release. Large enterprises may even ask large-scale model developers like OpenAI to indemnify them against any violations while using their APIs. Internal legal counsel of both the tech companies and the API-using enterprises need to be trained on the new laws and get ready for contract negotiations. Systems integrators and outsourcers such as Tech Mahindra, TCS, Infosys, etc. also need to gear up for the challenge. Liability will be passed down from the enterprise to the systems integrators, and they need to ensure compliance is built in and tested correctly, with proper documentation.

  • Governments & Regulators

Government and regulatory bodies need to upskill their staff on the new laws and on how to verify and test compliance for commercial launch approval. The tech companies are very big and can throw the best technical and legal talent at justifying that their systems are compliant; if regulatory bodies are not skilled enough to verify those claims, the law will become ineffective and exist only on paper. This is a huge challenge for government bodies.

  • The legal community: public prosecutors, company legal counsel, and defence lawyers

Are they ready for the avalanche of legal cases, starting from regulatory approvals and appeals, to ongoing copyright violations, privacy violations, and inter-company litigation over liability sharing between tech companies, enterprises, and systems integrators?

Massive upskilling and training is needed even for senior lawyers, as the issues arising from this law are very different. Law degree curricula need to include a course on AI regulations. For example, the essence of a comedian's talk show is "learnt" by a deep learning model and stored deep in its neural network: is that a copyright violation? The model then outputs a similar style of comedy speech using the "essence" stored in the neural network: is the output a copyright violation? Who is responsible and accountable for an autonomous car accident? Who is responsible for a factory accident causing injury to a worker in an autonomous robot factory? Lots of new legal challenges.

Most Indian systems integrators are investing large sums of money to reskill and to create new AI-based service offerings. I hope they are spending part of that investment on AI regulations and compliance; otherwise, they run the risk of losing all the profits in a few tricky legal challenges.

More later

L Ravichandran