
Beyond the script: The future of digital customer service

In the past, companies noticed that their customers were getting frustrated waiting for customer service agents to handle even simple queries. “All our agents are busy. Your call is important to us. Please wait.” became a dreaded message for customers looking for answers to simple questions. So many companies launched chatbots as part of digitalizing their customer service.

Chatbots could address only one set of customers: those with very basic queries. They are programmed to respond to specific commands or questions with predefined responses. They can’t grasp complex questions or deviate from their script; they offer generic answers based on pre-programmed rules and keywords; and they don’t improve over time, requiring manual updates instead.

Then came AI chatbots, with a promise to divert more customers away from human agents. They are expected to be smarter and more flexible through the use of Natural Language Processing (NLP) to understand intent and context. They can handle a wider range of questions and respond in a more natural way, adapt and improve through machine learning to offer more relevant responses over time, and tailor responses based on user history and preferences. These chatbots were expected to reduce response times and improve the overall customer experience.
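
To make the contrast concrete, here is a minimal sketch with hypothetical rules and intents: the scripted bot matches exact keywords to canned replies and falls off its script, while the intent-based bot (a crude stand-in for the NLP models real AI chatbots use) matches paraphrases and personalizes with user context.

```python
# Hypothetical rules: a scripted chatbot maps exact keywords to canned replies.
SCRIPT = {"refund": "To request a refund, visit example.com/refunds.",
          "hours": "We are open 9am-6pm, Monday to Friday."}

def scripted_bot(message: str) -> str:
    for keyword, reply in SCRIPT.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't understand. Please hold for an agent."

# An AI chatbot instead classifies intent; this keyword scorer is only a
# stand-in for the NLP models real systems use.
def ai_bot(message: str, user: dict) -> str:
    intents = {"refund_status": ["refund", "money back", "reimburse"],
               "opening_hours": ["hours", "open", "close"]}
    scores = {intent: sum(kw in message.lower() for kw in kws)
              for intent, kws in intents.items()}
    intent = max(scores, key=scores.get)
    if scores[intent] == 0:
        return "Let me connect you to an agent with the chat transcript."
    if intent == "refund_status":
        # Personalization from (hypothetical) user history.
        return f"Hi {user['name']}, your refund for order {user['last_order']} is in process."
    return "We are open 9am-6pm, Monday to Friday."

print(scripted_bot("Where is my money back?"))  # falls off the script
print(ai_bot("Where is my money back?", {"name": "Asha", "last_order": "A123"}))
```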

So, what is customer behavior after this advancement in chatbot technology? A recent survey commissioned by customer experience platform CallVu sheds some interesting light on it.

Figure 1: Source: CallVu – AI in Customer Service Survey Mar 2024

A significant share of respondents, 81%, indicated a readiness to wait, for varying durations, to talk with a live agent. 16% were ready to wait more than 10 minutes to talk to a live agent! Only 14% seemed ready to go straight to a chatbot.

Now combine the above survey findings with the findings below, in the same report.

Figure 2: Source: CallVu – AI in Customer Service Survey Mar 2024

As CallVu found, people rated live agents much higher than AI assistants on most dimensions, with slight rating advantages for AI assistants on speed, patience, and accuracy.

The interesting part is that customers prefer talking to live agents for venting frustration, indicating a role beyond just problem resolution: the exhibition of empathy. At the same time, it is clear that customers prefer interacting with chatbots for simple queries with accurate answers, and chatbot interactions also seem to give customers a sense of being treated patiently.

Does this mean there is no road ahead for the digitalization of customer service interactions using chatbots? A few other surveys show data to the contrary. The sixth edition of Salesforce’s ‘State of the Connected Customer’ report brings out the fact that 61% of customers would still prefer to use self-service to resolve an issue. But there is a warning: 68% say that after one bad experience they will never use that company’s self-service again. With these findings, Salesforce makes a case for an opportunity to further improve the experience through more intelligent, autonomous agents powered by Generative AI.

If we look at what Salesforce promises through its ‘Einstein’ Autonomous AI Service Agent, it gives a peek into what to expect from such agents when other Independent Software Vendors start delivering similar products into the market.

Sophisticated reasoning and natural responses: Fluid, intelligent conversations, coupled with logical inferences that connect pieces of information from the company’s various data sources.

24/7 swift resolutions driven by trusted data: Grounded in the company’s trusted business data.

Built-in guardrails: Including protection of PII (Personally Identifiable Information).

Cross-channel and multimodal: Self-service portals, WhatsApp, Apple Messages for Business, Facebook Messenger, SMS and so on.

Seamless handoffs to human agents: Handoff to a human agent, if needed, with the full context of the conversation; for example, when something must be handled outside defined policy.

Only time will tell whether this will move the needle in the right direction for customers to start relying on digital means more and more to get their service requests resolved. In the near future, we might see a hybrid environment where all three types coexist. Traditional chatbots can handle simple tasks, while AI chatbots manage complex interactions. Autonomous AI chatbots can take on more advanced roles, working alongside humans.

AI Symphony in Airline Enterprises

Good old-fashioned AI (or what is now called Traditional AI) is deterministic in nature, while Generative AI is more probabilistic. Traditional AI relies on explicit rules, logic, and predefined algorithms. Given the same input and conditions, it will always produce the same output. This predictability ensures transparent behavior and decision-making processes.

Generative AI, on the other hand, generates new content after learning from data, expressing outcomes as probabilities. It adapts to different contexts and produces varied outputs even with the same input.

By combining Traditional AI for stability, interpretable decision-making, and well-defined rules in critical tasks, with Generative AI for creativity, adaptability, and handling complex, unstructured data, enterprises can create powerful systems that balance reliability and innovation.
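
A tiny sketch of that contrast, using entirely made-up rules and canned outputs: the rule-based function returns the same answer for the same input every time, while the ‘generative’ stand-in samples from a probability-weighted set of responses.

```python
import random

# Neither function is a real AI system; they only illustrate the
# deterministic-vs-probabilistic behavioural difference described above.
def traditional_ai(duty_hours: float) -> str:
    # Explicit rule: the same input always yields the same output.
    return "REST REQUIRED" if duty_hours > 10 else "FIT FOR DUTY"

def generative_ai(prompt: str) -> str:
    # A learned distribution (hand-coded here): outputs are sampled, so the
    # same prompt can yield different, probability-weighted responses.
    candidates = ["Swap crew A with standby crew C.",
                  "Delay departure by 40 minutes to satisfy rest rules.",
                  "Deadhead a reserve pilot on the 09:10 flight."]
    return random.choices(candidates, weights=[0.5, 0.3, 0.2])[0]

print(traditional_ai(11.5))            # always 'REST REQUIRED'
print(generative_ai("Crew shortage"))  # varies from run to run
```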

This blog explores how this AI Symphony can be used in the context of an Airline Crew Scheduling System.

The airline industry is a dynamic sector where every aspect of operations demands meticulous planning and execution. One of the core components of these operations is crew scheduling, which has to accommodate a wide range of variables and unforeseen events. The purpose of crew scheduling is to define where crew members will be on set dates and times; at its heart is the Crew Roster. The diagram below brings out some of the key dimensions, if not all, that impact the activity of arriving at an optimal crew schedule, and outlines some of the key variables within each dimension. For some of the terminology involved, you can check Ref 1.


The key dimensions that drive the schedule are:

  • Business related – Covers variables such as aircraft, routes, and schedules.
  • Crew related – Covers variables such as availability, roster bids, skills, training schedules, medical checks, and license validity.
  • Disruptions – Covers events like technical failures, weather conditions, and crew emergencies.
  • Legal requirements – Covers constraints like flight time, duty time, and minimum rest period.
  • Labour union agreements – Covers constraints like agreed work hours, scheduling rules, pay and compensation, and seniority considerations in roster bidding.
  • Crew Pairing – Covers requirements like flight pairings to be fulfilled by paired crews. A flight pairing (also known as a trip or crew rotation) is a sequence of nonstop flights (flight legs) that starts and ends at the same airport. Once flight pairings are established, airlines can assign crew members (crew pairing) specific tasks based on these designated flights. Suppose an airline operates flights from New York (JFK) to London (LHR) and back. A pairing could be JFK-LHR (outbound leg) followed by LHR-JFK (return leg); the crew assigned to this pairing would fly from JFK to LHR, rest, and then fly back to JFK.
  • Deadhead travels – A deadhead refers to a flight within a trip sequence where a crew member (such as a pilot or flight attendant) is not scheduled to work. Deadheads are necessary when trip continuity fails due to delays or cancellations and crew are required to reach a location to take over a shift. It is a cost to the airline, as ticket revenue is lost. Deadhead planning has to accommodate the available deadhead routes, the cost of travel on each route, and conflicts between crew rest-period requirements and the duration of travel on the route.

The key outcomes of the scheduling exercise are:
  • Crew Roster – Shows the assigned duties and responsibilities of each crew member. It ensures that the correct number of crew members are scheduled to work at all times and that they are properly rested.
  • Crew communications and notifications – Through SMS, WhatsApp, or automated voice calls. Duty assignments, schedule changes, flight updates, etc., are conveyed to the crew.

The future
Digitalization of crew scheduling and roster management has happened through IT systems, which incorporate some level of mathematical rules to help planners carry out the job. A combination of Traditional AI and Generative AI has the potential to take this digitalization further, reducing the people intensity involved in creating these digital rosters and making them more responsive to unforeseen events.

In general, Traditional AI is better for rule-based, deterministic tasks, while Generative AI excels at creative content generation and learning from data. The areas covered below give an appreciation of how these two AI types can work together, though this is not an exhaustive list of possibilities.

  • Crew Assignment Optimization: Crew assignment optimization involves creating efficient schedules for crew members based on predefined rules, multiple variables, and constraints. Traditional AI handles this well, as it relies on deterministic algorithms to find optimal assignments (a toy version of such an assignment search is sketched after this list).
  • Rostering: Automating the rostering process, where all the impacting elements except perhaps disruptions are accommodated, is handled well by Traditional AI. If the airline wants to implement Dynamic Rostering, which learns from historical data and adjusts schedules based on changing conditions (e.g., flight delays, last-minute crew demands), then Generative AI is more suitable. Generative AI can propose options that were never thought of before, making the roster more dynamic.
  • Pairing Optimization: Pairing optimization involves creating efficient sequences of flights (pairings) for crew members. Such a requirement is well suited to Traditional AI due to its deterministic behaviour.
  • Deadhead crew positioning: In the context of deadhead positioning, having clear rules and protocols is crucial for efficient repositioning. Deadhead positioning requires immediate decisions based on operational needs (e.g., flight delays, crew availability), with stability and predictability. Adherence to legal regulations (e.g., duty time limits, rest requirements) is critical during disruptions; no creativity is acceptable here. It is a large-scale repositioning exercise that needs to be managed in a timely, scalable manner. Hence Traditional AI is suitable here.
  • Crew Communications and Notifications: Traditional AI can handle automated crew notifications (e.g., flight changes, duty reminders) based on predefined triggers, so it is suitable for routine communications. The interaction with the crew can be further enhanced by bringing in Generative AI chatbots that use natural language interactions for crew queries and assist with logistics, roster bids, crew-swap possibilities, and so on. The interaction can become more context-sensitive, for example in logistics assistance based on the crew member’s current location and the weather and traffic conditions there. This also reduces administrative workload.
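
As a concrete illustration of the deterministic, rule-driven nature of crew assignment, below is a minimal sketch. All data, limits, and the objective are hypothetical; a production system would use an ILP or constraint-programming solver over far richer legal, union, and qualification rules.

```python
from itertools import permutations

# Hypothetical pairings (duty hours, aircraft type) and crew
# (type ratings, hours already flown this month).
pairings = {"P1": {"hours": 9, "aircraft": "A320"},
            "P2": {"hours": 11, "aircraft": "B777"},
            "P3": {"hours": 7, "aircraft": "A320"}}
crew = {"C1": {"rated": {"A320"}, "month_hours": 70},
        "C2": {"rated": {"A320", "B777"}, "month_hours": 55},
        "C3": {"rated": {"A320", "B777"}, "month_hours": 88}}
MAX_MONTH_HOURS = 100  # illustrative legal/union duty limit

def feasible(c, p):
    """Crew member c may fly pairing p only if type-rated and under the limit."""
    return (pairings[p]["aircraft"] in crew[c]["rated"] and
            crew[c]["month_hours"] + pairings[p]["hours"] <= MAX_MONTH_HOURS)

best, best_peak = None, float("inf")
# Exhaustive search is fine for a toy instance; same input, same output.
for perm in permutations(crew, len(pairings)):
    assignment = dict(zip(pairings, perm))
    if all(feasible(c, p) for p, c in assignment.items()):
        # Objective: minimize the most-loaded crew member's monthly hours.
        peak = max(crew[c]["month_hours"] + pairings[p]["hours"]
                   for p, c in assignment.items())
        if peak < best_peak:
            best, best_peak = assignment, peak

print(best, best_peak)  # {'P1': 'C1', 'P2': 'C2', 'P3': 'C3'} 95
```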

The above are the traditional scheduling areas. Some other possible areas where Generative AI can complement Traditional AI are given below.

  • Scenario Exploration and Contingency Planning: Generative AI can simulate alternative scheduling scenarios based on historical patterns. It can explore “what-if” situations, such as crew shortages, equipment failures, or unexpected events. By creatively generating various scenarios, it helps airlines prepare for contingencies.
  • Predictive Crew Sickness and Fatigue Management: Generative AI can analyze crew health data, historical sickness patterns, and fatigue indicators. It can predict potential crew shortages due to sickness or fatigue. Creativity lies in identifying early warning signs and suggesting preventive measures.

This use case brings out the fact that most operational ‘AIfication’ use cases in enterprises will follow a hybrid approach involving Traditional AI and Generative AI.

 

References:

  1. Understanding Cabin Crew Roster https://cabincrewhq.com/cabin-crew-roster/

 

Note: The diagram was created using “Xmind” mind mapping tool from XMIND LTD.

GenAI Adoption – Challenges in Manufacturing Enterprise

While discussions have been ongoing regarding the use of fine-tuned Large Language Models (LLMs) for specific enterprise needs, the high cost associated with cloud-based LLMs, including subscription fees and API usage charges, is becoming increasingly evident. This cost barrier has been a major hurdle for many enterprises seeking to transition GenAI-powered solutions from pilot programs to production environments.

Conversely, Small Language Models (SLMs) appear to be a more suitable option for businesses seeking specialized applications due to their lower cost and resource requirements. Enterprises typically operate with constrained budgets. Piero Molino (Chief Scientific Officer & Cofounder of Predibase, creator of Ludwig, formerly of Uber’s AI Lab), predicts that SLMs will be a major driver of enterprise adoption in 2024 due to their attractive financial proposition.

But within the enterprise sector, manufacturing enterprises, especially medium and small manufacturers, will likely be among the slowest adopters of GenAI in their operations. Let us explore the reasons, because the combination of Industry 4.0’s data collection and connectivity with GenAI’s analytical and generative capabilities has significant potential to transform manufacturing into a more autonomous, intelligent, and efficient system.

Business Hurdles

Cost

The high cost of Large Language Models (LLMs) is a major hurdle for their adoption in manufacturing. LLMs require massive computing power for training and inference, making enterprises reliant on cloud providers. However, cloud provider fees can scale rapidly with model size and usage, and vendor lock-in can be a concern.

Small Language Models (SLMs) offer a potential solution. Their lower computational footprint makes on-premises deployment a possibility. However, implementing SLMs requires expertise in machine learning and language model training, which some enterprises may lack. Hiring additional staff or finding a vendor with this expertise is an option, but maintaining an SLM on-premises can be complex and requires significant IT infrastructure.

For many manufacturing enterprises, the complexity and cost of on-premises SLM maintenance might outweigh the benefits of reduced cloud costs. This could lead them back to cloud-based SLMs, landing them where they started.

Security Concerns

Security concerns around data privacy are a major hurdle for manufacturing companies considering both external vendors and cloud adoption. Medium and small manufacturers, in particular, have usually viewed the cloud with apprehension.

Change Management

Implementing Generative AI (GenAI) solutions can necessitate significant modifications to existing manufacturing software and may require changes to current processes. While change management might be straightforward for greenfield projects (entirely new systems), most implementations will be brownfield projects (upgrades to existing systems). Manufacturers are understandably hesitant to disrupt well-functioning manufacturing processes unless there’s a compelling reason. Therefore, a robust business case and a well-defined plan for minimizing disruption during change management are crucial.

Technical Hurdles

Data Challenges

GenAI models require large amounts of clean, labelled data to train effectively. Manufacturing processes can be complex and generate data that is siloed, inconsistent, or proprietary. So, unless there are existing observability solutions that have captured sensor telemetry over a period of time, the manufacturer cannot directly introduce a GenAI solution. Additionally, companies may be hesitant to share this data with external vendors.

Integration Complexity

Integrating GenAI solutions with existing manufacturing systems can be complex and require expertise in both AI and manufacturing technologies. Vendors may need to have experience working with similar manufacturing systems to ensure a smooth integration. Existing vendors may have to be roped in for the integration, which would incur additional cost. Integration governance could become complex.

Lack of Standardization

The field of GenAI is still evolving, and there is a lack of standardization in tools and techniques. This can make it difficult for companies to evaluate and select the right vendor for their needs.

Accuracy

SLMs are likely less susceptible to hallucination and bias than LLMs: they are trained on a smaller amount of data, typically focused on a specific domain or task, and have a simpler architecture. Hence, they are less prone to situations where the model invents information or connections that aren’t there.

Data quality still matters, though. Even with a smaller dataset, bias can still be present if the training data itself is biased. In the case of manufacturing systems, this means plant-shift bias, machine-life bias, role-importance bias, vendor bias, etc. Bias can also build up through the feedback loop from new production output.

Less Established Tools and Expertise

There are fewer established tools and frameworks specifically designed for SLMs compared to LLMs. Finding experts with experience in implementing SLM-based GenAI solutions might be more challenging.

Conclusion

What you will notice is that though there is a cost reduction from using an SLM instead of an LLM, the challenges and hurdles remain almost the same. The hesitation that existed among manufacturers for LLM-based solutions remains for SLM-based solutions, in many cases preventing them from moving from pilot to production. That hesitation needs to be tackled on a priority basis to unlock the potential of SLMs for the future of smart manufacturing.

Generative AI in Product Genealogy Solution in Manufacturing

The demand for guaranteed product quality through comprehensive traceability is rapidly spreading beyond the pharmaceutical industry and into other manufacturing sectors. This rising demand stems from both increased customer awareness and stricter regulations. To address this need, manufacturers are turning to Product Traceability, also known as Product Genealogy, solutions.

Efforts over the past 4-5 years, even by Micro, Small and Medium Enterprises (MSMEs), to embrace digitalization and align with Industry 4.0 principles have paved the way for the deployment of hybrid Product Genealogy solutions. These solutions combine digital technology with human interventions. However, the emergence of readily available and deployable Generative AI models presents a promising opportunity to further eliminate human intervention, ultimately boosting manufacturing profitability.

To illustrate this potential, let’s consider the Long Steel Products Industry. This industry encompasses a diverse range of products, from reinforcement bars (rebars) used in civil construction with less stringent requirements, to specialized steel rods employed in demanding applications like automobiles and aviation.

The diagram below gives a high-level view of the manufacturing process stages.

Beyond core process automation done under Industry 3.0, steel manufacturers have embraced digitalization through Visualization Solutions. These solutions leverage existing sensors, supplemented by new ones and IIoT (Industrial IoT) technology, to transform data collection. They gather data from the production floor, send it to cloud hosted Visualization platforms, and process it into meaningful textual and graphical insights presented through dashboards. This empowers data-driven decision-making by providing valuable management insights, significantly improving efficiency, accuracy, and decision-making speed, ultimately benefiting the bottom line.

However, human involvement remains high in decision-making, defining actions, and implementing them on the production floor. This is where Generative AI, a disruptive technology, enters the scene.

Imagine a production process equipped with a pre-existing Visualization solution, constantly collecting data from diverse sensors throughout the production cycle. Let’s explore how Generative AI adds value in such a plant, specifically focusing on long steel products where each batch run (“campaign”) typically produces rods/bars with distinct chemical compositions (e.g., 8mm with one composition, 14mm with another).

Insights and Anomalies

  • Real-time data from diverse production sensors (scrap sorting, melting, rolling, cooling) feeds into a time-series database. This multi-modal telemetry data, like temperature, pressure, chemical composition, vibration, visual information, etc., fuels a Visualization platform generating predefined dashboards and alerts. With training and continuous learning, Generative AI models analyse this data in real time, identifying patterns and deviations not envisaged in the predefined expectations (a simplified sketch of such deviation flagging follows this list). These AI-inferred insights, alongside predefined alerts, highlight potential issues like unexpected temperature spikes, unusual pressure fluctuations, or off-spec chemical composition.
  • If trained on historical and ongoing ‘action taken’ data, the AI model can generate partial or complete configurations (“recipes”) for uploading to PLCs (Programmable Logic Controllers). These recipes, tailored for specific campaigns based on desired results, adjust equipment settings like temperature, cooling water flow, and conveyor speed. The PLCs then transmit these configs to equipment controllers, optimizing production for each unique campaign.
  • Individual bars can be identified within a campaign using QR code stickers, engraved codes, or even software-generated IDs based on sensor data. This ID allows the AI to link process and chemical data (known as ‘Heat Chemistry’) to each specific bar. This information helps identify non-conforming products early, preventing them from reaching final stages. For example, non-conforming bars can be automatically separated at the cooling bed before reaching bundling stations.
  • Customers can access detailed information about the specific processes and materials used to create their steel products, including actual chemistry and physical quality data points. This transparency builds trust in the product’s quality and origin, differentiating your brand in the market.
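
To make the deviation-flagging idea concrete, here is a minimal sketch assuming a plain stream of readings from a single sensor; real deployments would use trained models over multi-modal telemetry rather than this fixed rolling z-score rule.

```python
import statistics
from collections import deque

# Rolling z-score over one sensor channel; WINDOW and THRESHOLD are
# illustrative assumptions, not tuned values.
WINDOW, THRESHOLD = 60, 3.0
history = deque(maxlen=WINDOW)

def check_reading(value: float) -> bool:
    """Return True if the reading deviates sharply from the recent window."""
    is_anomaly = False
    if len(history) >= 10:  # need a minimal baseline before judging
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # guard divide-by-zero
        is_anomaly = abs(value - mean) / stdev > THRESHOLD
    history.append(value)
    return is_anomaly

# Example: a sudden spike in an otherwise slowly drifting melt-zone stream.
stream = [1520.0 + i * 0.1 for i in range(30)] + [1580.0]
flags = [check_reading(v) for v in stream]
print(flags[-1])  # True: the 1580 reading is flagged as an anomaly
```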

Enriched Data Records

  • The AI model’s capabilities extend beyond mere interpretation of raw sensor data—it actively enriches it with additional information. This enrichment process encompasses:
    • Derived features: AI extracts meaningful variables from sensor data, such as calculating cooling rates from temperature readings or estimating carbon content from spectral analysis (a small sketch of this follows the list).
    • Contextualization: AI seamlessly links data points to specific production stages, equipment used, and even raw material batch information, providing a holistic view of the manufacturing process.
    • Anomaly flagging: AI vigilantly marks data points that deviate from expected values, making critical events easily identifiable and facilitating prompt corrective actions. This also helps in continuous learning by the AI model.
  • This enriched data forms a comprehensive digital history for each bar, providing invaluable insights that fuel process optimization and quality control initiatives.
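
Below is a minimal sketch of the derived-features idea under simplified assumptions: timestamped temperature readings for one bar, an average cooling rate computed from them, and a hypothetical bar ID used for contextualization.

```python
from datetime import datetime, timedelta

# Hypothetical timestamped temperature samples for one bar at the cooling bed.
readings = [(datetime(2024, 3, 1, 10, 0, 0) + timedelta(seconds=30 * i), t)
            for i, t in enumerate([950.0, 905.0, 863.0, 824.0, 788.0])]

def cooling_rate(samples):
    """Average cooling rate in degrees C per second, first to last sample."""
    (t0, temp0), (t1, temp1) = samples[0], samples[-1]
    return (temp0 - temp1) / (t1 - t0).total_seconds()

record = {
    "bar_id": "CAMP042-BAR0173",                          # hypothetical ID
    "cooling_rate_c_per_s": round(cooling_rate(readings), 3),  # derived feature
    "stage": "cooling_bed",                               # contextualization
}
print(record)  # {'bar_id': ..., 'cooling_rate_c_per_s': 1.35, 'stage': ...}
```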

While the aforementioned functionalities showcase Generative AI’s immediate impact on traceability, its potential extends far beyond. Trained and self-learning models pave the way for advancements like predictive maintenance, product simulation, waste forecasting, and even autonomous recipe management. However, these exciting future applications lie beyond the scope of this blog.

Despite its nascent stage in long steel product genealogy, Generative AI is already attracting significant attention from various companies and research initiatives. This growing interest underscores its immense potential to revolutionize the industry.

Challenges and Considerations

  • Data Quality and Availability: The success of AI-powered traceability hinges on accurate and complete data throughout the production process. Integrating AI with existing infrastructure and ensuring data consistency across systems pose significant challenges.
  • Privacy and Security Concerns: Sensitive data about materials, processes, and customers must be protected. Secure data storage, robust access control mechanisms, and compliance with relevant regulations are paramount.
  • Scalability and Cost-Effectiveness: Implementing AI-based solutions requires investment in hardware, software, and expert skills. Careful ROI analysis and planning are crucial to avoid budget overruns. Scaling these solutions to large facilities and complex supply chains requires thoughtful cost analysis and strategic planning.

By addressing these challenges and unlocking the power of Generative AI, manufacturers can establish robust and transparent product traceability systems. This, in turn, will lead to enhanced product quality, increased customer trust, and more sustainable practices.

Domain and LLM

I am in total agreement with the Morgan Zimmerman (Dassault Systèmes) quote in TOI today. Every industry has its own terminologies, concepts, names, and words, i.e., an industry language. He says even a simple-looking word like “Certification” has different meanings in aerospace vs. life sciences. He recommends the use of industry-specific language, and your own company-specific language, to get significant benefit out of LLMs. This will also reduce hallucinations and misunderstanding.

This is in line with @AiThoughts.Org thoughts on using domain- and company-specific information on top of the general data used by all LLMs. Like they say in real estate, the three most important things in any buying decision are “Location, Location and Location”. We need three things to make LLMs work for the enterprise: “Domain, Domain and Domain”. Many of us may recall a very successful Bill Clinton presidential campaign slogan: “The economy, Stupid”. We can say “The domain, Stupid” as the slogan for making LLMs useful for enterprises.

But the million-dollar question is: how much is it going to cost to do the learning updates using your domain and company data? EY published a cost of $1.4 billion, which very few can afford. We need much less expensive solutions for large-scale implementation of LLMs.

I solicit your thoughts. #LLM #aiml #Aiethics #Aiforindustry

L Ravichandran

Generative AI in Plant Maintenance

Today almost all manufacturing verticals are highly competitive, making it necessary to avoid any breakdown of equipment in the manufacturing process. This has made the past practice of Reactive Maintenance unacceptable. Aiming to eliminate breakdowns has other benefits too, like improved employee motivation, reduction in opportunity costs, and reduction in production cost.

There are broadly six maintenance types, indicated below in order of maturity (Ref. 1):

1. Reactive Maintenance: When it breaks, you fix it. This is where most manufacturers start. It results in emergency maintenance, which is unintentional and consists of repairing and replacing equipment on a “fire-fighting” basis. Production loss is the usual result.

2. Preventive Maintenance: You schedule replacements ahead of time, before parts break, usually at a regular interval.

3. Usage-Based Maintenance: You replace material when the machine has been used a certain amount, before parts break. For example, you change the oil in the equipment after, say, 5,000 hours of usage. It doesn’t matter whether it takes one month or one year to hit five thousand hours; the oil only needs to be replaced once it has been used to its potential, and further use could cause degradation of other parts.

4. Condition-Based Maintenance: You replace parts when they seem to be getting too worn out to continue functioning appropriately. Measurement of the condition of the parts can be manual, with very frequent inspections, or continuous, using sensors attached to the equipment. This results in more usage for the money spent.

5. Predictive Maintenance: You utilize historical data to make predictions about when a part will break and replace parts based on these predictions, prior to them breaking. This usually utilizes IIoT (Industrial IoT) and often, but not always, artificial intelligence and machine learning. It still depends on managers to take actions, like creating work orders, assigning technicians, etc.

6. Prescriptive Maintenance: Advanced data analysis methods are used to do more than predict failure points: they provide hypothetical outcomes in order to choose the best action to take, to avoid or delay failures, before failures, safety hazards, and quality issues arise. It automatically creates work orders, requires no intervention from managers, and oversees equipment on its own. Generative AI (Gen AI) is helpful here. (The contrast between the trigger logic of types 3 to 5 is sketched in code below.)
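
Here is a minimal sketch of how the trigger logic differs across usage-based, condition-based, and predictive maintenance; the fields and thresholds are illustrative assumptions for a single hypothetical motor, not real standards values.

```python
from typing import Optional

def maintenance_due(machine: dict) -> Optional[str]:
    # Usage-based (type 3): act once a usage budget is consumed.
    if machine["hours_since_service"] >= 5000:
        return "usage-based: 5,000-hour oil interval consumed"
    # Condition-based (type 4): act when a measured condition crosses a limit.
    if machine["vibration_mm_s"] > 7.1:  # illustrative alarm threshold
        return "condition-based: vibration above alarm threshold"
    # Predictive (type 5): act when a model's remaining-useful-life is short.
    if machine["predicted_rul_hours"] < 200:
        return "predictive: model forecasts imminent failure"
    return None  # keep running

motor = {"hours_since_service": 3100, "vibration_mm_s": 4.2,
         "predicted_rul_hours": 150}
print(maintenance_due(motor))  # 'predictive: model forecasts imminent failure'
```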

A manufacturer implements a combination of the approaches above, based on cost-benefit analysis. The two approaches that benefit significantly from AI techniques are Predictive and Prescriptive Maintenance.

The three core systems that are connected with each other are Asset Management, Maintenance Management and Inventory Management.

1. Asset Management System (AMS): Maintains a map of the assets deployed and their characteristics. It monitors wear and tear, and hence the remaining life of assets or their parts. It sends Work Orders as triggers for maintenance requirements to the Maintenance Management System.

2. Maintenance Management System (MMS): Acts on the Work Orders from the AMS to generate the activity plan, inventory allocation/ordering, technician scheduling, and calendar management required to get the job done.

3. Inventory Management System (IMS): Stores the current inventory with its parts’ characteristics and vendor details. On a trigger from the MMS, it either allocates available parts from existing inventory or procures the part through its ordering process.

The infusion of Gen AI in Plant Maintenance consists of three key aspects:

1. Continuous Condition Monitoring

2. Predicting failures

3. Executing repairs / replacements

Continuous Condition Monitoring is primarily implemented by deploying various sensors enabled by IIoT, with plantwide WiFi connectivity feeding the real-time inputs as time-series data to the AMS. For example, sensors are deployed on all motors to pick up rotation, speed, and temperature data and send it continuously to the AMS. In some cases, this could even be vision (image) data from cameras, for example when monitoring the depth of the ‘roller grooves’ used to roll steel bars from steel billets in the steel industry. The AMS consumes all these inputs.

Predicting Failures is typically done by the AMS. Using machine learning, the real-time data received is analyzed by models trained on historical sensor data, usually correlated with different product manufacturing campaigns, to anticipate potential breakdowns in equipment or their parts. The parts’ technical data from the product vendor is also used.

For example, from vendor-provided data on a motor, the AMS knows the life expectancy of the motor in hours at a certain load, via the IMS where all details of the motor are stored. The hours a motor has run, and the load at which it ran, can be derived from readings of the current drawn. The AMS may pick up all the current-draw readings from the time-series data it receives from sensors and meters and calculate the hours of running and the average load during that period.

Similarly, for each piece of equipment, there may be different sensor readings that can be used to calculate the life consumed. Applying the learning of the trained model, insights are generated to detect developing defects before they become major problems and to determine the remaining useful life (RUL) of the assets. The AMS then generates a Work Order as a request for maintenance, along with constraints like the outer time limit before which the maintenance must be done. (A toy version of the motor-life calculation is sketched below.)
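
Below is a minimal sketch of that calculation under stated assumptions: periodic current-draw samples, a hypothetical vendor life rating, and a crude linear derating rule standing in for real datasheet curves.

```python
SAMPLE_MINUTES = 10            # assumed sampling interval of the meter
RATED_LIFE_HOURS = 40_000      # hypothetical vendor life at rated load
RATED_CURRENT_A = 50.0         # hypothetical rated current

def rul_from_current_samples(samples_amps):
    """Estimate hours run, average load, and remaining useful life."""
    running = [a for a in samples_amps if a > 1.0]   # ignore idle/off samples
    hours_run = len(running) * SAMPLE_MINUTES / 60
    avg_load = sum(running) / len(running) / RATED_CURRENT_A
    # Crude derating: life consumed scales with load relative to rating.
    consumed = hours_run * max(avg_load, 0.1)
    return hours_run, avg_load, RATED_LIFE_HOURS - consumed

samples = [48.0] * 1200 + [0.2] * 300   # ~200 h at ~96% load, then idle
hours, load, rul = rul_from_current_samples(samples)
print(f"{hours:.0f} h run at {load:.0%} load, ~{rul:.0f} h RUL remaining")
```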

Executing repairs/replacements is done by the MMS. Based on the maintenance request, the MMS deduces the material required, the skills required, and the work-shift calendar, and carries out the activities below (sketched in code after the list):

  • Receive the Work Order to carry out the task and generate the activity plan.
  • Assign a technician, based on required skills, individual availability per the calendar, and the plant’s holiday schedule. Put that as a task in the technician’s calendar.
  • Book the tools required for the technician’s work.
  • Put the request for the required material into the IMS to get it allocated, or purchased and then allocated.
  • Update the AMS when the job is done, with the required details, so that monitoring can start again.
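
A minimal sketch of this flow, with hypothetical types and data; a real MMS would integrate shift calendars, tool booking, and the IMS ordering interface rather than the in-memory stand-ins used here.

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    asset_id: str
    skill: str
    parts: list
    deadline_hours: int

# Hypothetical technician roster and parts stock.
technicians = {"T1": {"skills": {"electrical"}, "free": True},
               "T2": {"skills": {"mechanical", "electrical"}, "free": False}}
inventory = {"bearing-6204": 4, "oil-5l": 0}

def execute(order: WorkOrder) -> dict:
    # Assign the first available technician with the required skill.
    tech = next((t for t, d in technicians.items()
                 if order.skill in d["skills"] and d["free"]), None)
    # Allocate parts from stock; anything missing goes to the IMS to purchase.
    to_order = [p for p in order.parts if inventory.get(p, 0) == 0]
    return {"technician": tech, "purchase_from_ims": to_order}

wo = WorkOrder("motor-17", "electrical", ["bearing-6204", "oil-5l"], 72)
print(execute(wo))  # {'technician': 'T1', 'purchase_from_ims': ['oil-5l']}
```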

Opportunities

The amount of data available for training AI models is key to their ability to identify patterns and arrive at decisions, but obtaining large, labelled datasets can be challenging. Gen AI can be used to create new datasets matching the same underlying patterns as the original one. Such datasets can also be generated to bring in conditions and failure scenarios that are otherwise not possible to capture with historical data alone. The availability of such large datasets enables rigorous testing of prediction models, while mitigating bias in the model and enhancing the quality of prediction.
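
As a toy illustration of the idea, the sketch below uses jittered resampling as a simple stand-in for a trained generative model (a GAN or VAE in practice) to expand a small set of degradation runs and inject rarer failure variants.

```python
import random

random.seed(7)
# Two (hypothetical) bearing-temperature degradation runs ending near failure.
real_cycles = [[72.0, 74.5, 79.1, 85.0, 93.2],
               [71.5, 73.9, 78.2, 84.1, 92.0]]

def synthesize(cycles, n, noise=0.8, fail_shift=4.0):
    out = []
    for _ in range(n):
        base = random.choice(cycles)
        # Jitter preserves the underlying degradation pattern...
        run = [x + random.gauss(0, noise) for x in base]
        # ...and an occasional shift injects harsher failure scenarios that
        # the historical data alone does not contain.
        if random.random() < 0.3:
            run[-1] += fail_shift
        out.append(run)
    return out

augmented = real_cycles + synthesize(real_cycles, 200)
print(len(augmented), augmented[2])  # 202 synthetic-plus-real runs
```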

With its ability to consume multi-modal inputs like sensor data, images and text from manuals, camera inputs, etc., Gen AI can build a more comprehensive understanding of a machine’s or part’s health, fostering faster anomaly detection, better prediction, and accurate maintenance recommendations.

Gen AI can be used to create a Job Card and schedules for repairs based not only on the dimensions stated above, but also on analysis of the technician’s past performance on similar repairs, records of Mean Time Between Repairs, etc.

So, Gen AI does not just predict a problem; it provides a solution. When a machine or one of its parts shows signs of potential failure, Gen AI can look at a set of viable solutions and then generate a Work Order that ensures the most suitable fix.

No two similar machines or similar parts wear out similarly. Gen AI can generate different work orders and schedules based on real-time data, correlating it with other influencing parameters. This ensures cost-effectiveness in maintenance.

An interesting side effect of Gen AI’s ability to create a large set of synthetic data from a small set of actual data is its use as a training tool. It can simulate a plethora of machinery failure scenarios to offer realistic training experiences for technicians.

Challenges

The complexity involved in deploying Gen AI in Plant Maintenance requires significant computing power, so the natural choice is cloud-based infrastructure. Hence, safeguarding data privacy and security becomes paramount, as the exercise involves sensitive equipment information and maintenance logs.

Conclusion

Gen AI brings a lot of improvement opportunities to Plant Maintenance through greater accuracy, efficiency, and reliability. The implementation exercise should take cognizance of the challenges involved to make the adoption successful.

References:

1. The different types of maintenance in manufacturing, Graham Immerman, MachineMetrics, 2020

AI Legislation: Need Urgency

Let me first wish you all happy Navratri festivities. I still fondly remember the Durga Pooja days during my Indian Statistical Institute years. However, we also need to remember that we are in the midst of two wars, one in Ukraine and the other in the Middle East. We wish that solutions are found and that further loss of life and destruction is stopped.

I came across two articles in The Hindu newspaper regarding our topic, AI. I have attached a scan of an editorial by M.K. Narayanan, a well-known national security and cyber expert.

A few highlights are worth mentioning for all of us to ponder.

  • There is a general agreement that the latest advances in AI do pose a major threat and need to be regulated like nuclear power technologies.
  • All countries are not just “locking the gates after the horse has bolted”, but “discussing locking the gates, and deciding on the make and model of the lock, while the horse has bolted”. Huge delays in enacting and implementing AI legislation are flagged as a big issue.
  • Rogue nations that willfully decide not to enforce any regulations will gain a huge advantage over law-abiding nations.
  • More than 50% of large enterprises are sitting on “intangible” assets, which are at huge risk of evaporating through AI-powered cyber warfare by non-state actors.
  • Cognitive warfare using AI technologies will destabilize governments and news media and alter human cognition.
  • This is a new kind of warfare in which states and technology companies must closely collaborate.
  • Another interesting mention: over-dependence on AI and algorithms may have caused the major intelligence failure in the latest Middle East conflict.

All of these point to the same conclusion.  All countries and multi-lateral organizations such as UN, EU, African Union, G20 etc., multi-lateral military alliances like NATO etc. must move at lightning speed to understand and agree on measures to effectively control and use this great technology.  

The old classic advertisement slogan “JUST DO IT” must be the motto of all the organizations.

Similar efforts are needed by all large enterprises, large financial institutions, regulatory agencies to get ready for the scale implementation of these technologies.

Last but not least, large technology companies need to look at this not just as another innovation to help automation, but as a human-affecting, major-disruption-causing technology, and spend sufficient resources on understanding it and putting in sufficient brakes to avoid runaway situations.

Cyber security, ethics, and risk-management auditors will have huge opportunities, and they have to start upskilling fast.

More later,

L Ravichandran.

AI and Law

The public domain is full of initiatives by many law universities, large law firms, and various government departments on the topic of “AI and Law”. I was happy to see a news article a few days ago about the Indian consumer grievances cell thinking about using AI to clear a large number of pending cases. They have had some success in streamlining processes and making everything digital, but they felt that the sheer volume of pending cases needs AI-type intervention. I have already talked about the huge volume of civil cases pending in lower courts in India, with some cases taking even 20 years to reach final judgment. As the saying goes, “Justice delayed is justice denied”; it is imperative that we find solutions to this huge backlog problem.

All discussions are centred around two broad areas:

1. Legal research and development of a client’s case by law firms; basically, the core work of both junior and senior law associates and partners.

2. Assisting judges, or even rendering judgment on their own, by AI models, to reduce the backlog and speed up justice.

Lots of interesting discussions are happening on (1). Law research, looking into archives, similar judgments, precedents, etc., seems to be a no-brainer. Huge advances in automation have already been made, and this will increase multi-fold with purpose-built legal language models. What will happen to junior law associates is an interesting question. Can they use better research to develop actual arguments and superior case briefs for their clients, and take the load off senior associates, who in turn can focus more on client interactions? I found the discussions on models analysing judges’ earlier judgments and customizing argument briefs per judge fascinating.

Item (2) needs a lot of discussion. All democratic countries’ jurisprudence is based on three fundamental principles:

  1. Every citizen will have their “day in the court” to present their case to an impartial judge.
  2. Every citizen will have a right to a competent counsel with a provision of public defenders given free to the citizens.
  3. Every witness can be cross-examined by the other party without any restrictions.

On the one hand, we have these great jurisprudence principles.  On the other hand, we have huge backlogs and delays. 

How much of these basic principles are citizens willing to give up to get speedy justice?

Can we give up the principle of “my day in court” and let only written briefs submitted to the court be used for the final judgement? This would mean witness statements in briefs would not be cross-examined or questioned.

Can we give up the presence of a human judge who will read the briefs on both sides and make a judgement and let an AI Model read both the briefs and pronounce the judgement?

Even if citizens are willing to give up these principles, does the existing law of the land allow this?   It may require changes to law and in some countries even changes to their constitution to allow for this new AI jurisprudence.

Do we treat civil cases and criminal cases separately and find different solutions? Criminal cases involve human liberty issues such as imprisonment and will need a whole different set of benchmarks.

What about changes to the appeal process if you do not like a lower court judgment? I presume we will need human judges to review the judgements given by AI models. It is very difficult for us to accept a higher-court AI model reviewing and correcting a lower-court AI model’s original judgement.

The biggest hurdle is going to be us, the citizens.  In any legal case involving two parties, one party always and in many cases both parties will be unhappy with any judgement.  No losing party in any civil case is going to be happy that they lost as per some sub clause in some law text. In many cases, even winning parties may not be happy with the award amount.  In this kind of scenario, how do you expect citizens to accept an instantaneous verdict after both parties submit their briefs?  This will be a great human change management issue.

Even if we come up with solutions to these complex legal and people problems, one technical challenge still remains a big hurdle. With the release of many large language models and APIs, many projects are under way to train these LLMs on specific domains. A few days ago, we saw a press release by EY about their domain-specific model developed with an investment of US$1.4 billion. Bloomberg announced BloombergGPT, their own 50-billion-parameter language model purpose-built for finance. Who will bell the cat for the law domain? Who will invest large sums and create a legal AI model for each country? Until such a model is available for general use, many of the things we discussed will not be possible.

To conclude, there are huge opportunities to get business value out of the new AI technology in the Law and Justice Domain. However, technical, legal and people issues must be understood, addressed and resolved before any large-scale implementation.

More Later. Like to hear your thoughts.

L Ravichandran

AI Regulations: Need for Urgency

A few weeks ago, I saw a news article about the risks of unregulated AI. It reported that in the USA, police came to the house of an eight-months-pregnant African American lady and arrested her because a facial recognition system had identified her as the suspect in a robbery. No amount of pleading from the lady that, given her advanced pregnancy at the time of the robbery, she simply could not have committed the crime was heard by the police officer. The officer did not have any discretion: the system was set up such that once the AI face recognition identified a suspect, police were required to arrest her, bring her to the police station, and book her.

In this case, she was taken to the police station, booked, and released on bail. A few days later, the case against her was dismissed, as the AI system had wrongly identified her. It also emerged that she was not the first case: a few more people, especially African American women, had been wrongly arrested and later released due to an incorrect facial recognition model.

The slow pace at which governments are moving on regulations, combined with the proliferation of AI tech companies delivering business applications such as this facial recognition model, demands urgent regulation.

Maybe citizens themselves should organize and hold the people responsible for deploying these systems accountable. The chief of police, perhaps the mayor of the town, and the county officials who signed off on this AI facial recognition system should be made accountable. Maybe the county should pay hefty fines, not just offer a simple “oops, sorry”.

Lots of attention needs to be placed on training data. Training data should represent all the diverse people in the country in sufficient samples. Expected biases due to a lack of sufficient diversity in training data must be anticipated and the model tweaked. Most democratic countries have criminal justice systems with an unwritten motto: “Let 1,000 criminals go free, but not a single innocent person should go to jail.” The burden of proof of guilt is always on the state. However, we seem to have forgotten this when deploying these law enforcement systems. Proof with very high confidence levels, backed by explainable-AI, human-understandable reasoning, must be the basic approval criterion before these systems are deployed.

The proposed EU act classifies these law enforcement systems as high risk, and they will fall under the act. Hopefully the EU act becomes law soon and prevents such unfortunate violations of civil liberties and human rights.

More Later,

L Ravichandran

EU AI Regulations Update

I wrote some time back about the circulation of the draft EU AI Act. After more than two years, there is some more movement in making this an EU law. In June 2023, the EU Parliament adopted the draft and a set of negotiating principles, and the next step of discussions with member countries has started. EU officials are confident that this process will be completed by the end of 2023 and that this will become EU law soon. Like the old Hindi proverb “Bhagwan ke ghar mein der hai, andher nahin”: in God’s scheme of things, there may be delays, but never darkness. The EU has taken the first step, and if this becomes law by early 2024, it will be a big achievement. I am sure the USA and other large countries will follow soon.

The draft has more or less maintained its basic principles and structure. 

The basic objective of the new law is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. In addition, there is a larger emphasis on AI systems being overseen by people, rather than by automation alone. The principle of proportionate regulation, the risk categorization of AI systems, and a level of regulation appropriate to the risk are the central themes of the proposed law. Further, there were no generative AI or ChatGPT-like products when the original draft was developed in 2021, and hence additional regulations have been added to address large language models / generative AI models. The draft also plans to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

Just to recall from my earlier blog, the risks are categorized into limited risk, high risk, and unacceptable risk.

The draft law clearly defines systems that are categorized as “unacceptable risk” and proposes to ban them from commercial launch within EU community countries. Some examples are given below.

  • Any AI system which can change or manipulate the cognitive behaviour of humans, especially vulnerable groups such as children and the elderly.
  • Any AI system which classifies people based on personal traits such as behaviour, socio-economic status, race, or other personal characteristics.
  • Any AI system which does real-time, remote biometric identification, such as facial recognition, usually without the consent of the person targeted. The law also clarifies that past-data analysis for law enforcement purposes is acceptable with court orders.

The draft law is concerned about any negative impact on fundamental rights of EU citizens and any impact on personal safety.  These types of systems will be categorized as High Risk.

1)  Many products such as toys, automobiles, aviation products, and medical devices are already under existing EU product safety legislation. Any AI systems used inside products already regulated under this legislation will also be subjected to additional regulations under the high-risk category.


2)  Other AI systems falling into eight specific areas will be classified as high risk, require registration in an EU database, and be subjected to the new regulations.

The eight areas are:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Assistance in legal interpretation and application of the law.


Once these systems are registered in the EU database, they will be assessed by appropriate agencies for functionality, safety features, transparency, grievance mechanisms for appeal, etc., and will be given approval before they are deployed in the EU market. All updates and new versions of these AI systems will be subjected to similar scrutiny.


Other AI systems not in the above two lists will be termed “limited risk” systems and subjected to self-regulation. At a minimum, the law expects these systems to inform users that they are indeed interacting with an AI system and to provide options to switch to a human-operated system or discontinue use.

As I have mentioned before, the proposed law covers generative AI systems also. The law requires these systems to disclose to users that an output document or decision was generated or derived by a generative AI system. In addition, the system should publish the list of copyrighted training content used by the model. I am not sure how practical this is, given that ChatGPT-like systems are reading every digital content on the web and are now moving into audio/video content. Even if a system produces this list, which is expected to be very large, I am not sure current copyright laws are sufficient to address the use of this copyrighted material in a different form inside deep learning neural networks.

The proposed law also wants to ensure that the generative AI models are self-regulated enough not to generate illegal content or provide illegal advice to users.


The Indian government is also looking at enacting AI regulations soon; in a June 9th, 2023 interview, the Indian IT minister talked about this. He emphasized the objective of “no harm” to digital users. The government’s approach to any regulation of AI will be through the prism of “user harm, or derived user harm, through the use of any AI technology”. I am sure a draft will be out soon, and India will also have similar laws.

Let us discuss about what are the implications or consequences of this regulation among the various stakeholders.

  • AI system developer companies (tech companies and enterprises)


They need to educate all their AI development teams on these laws and ensure these systems are tested for compliance prior to commercial release. Large enterprises may even ask large-scale model developers like OpenAI to indemnify them against any violations while using their APIs. Internal legal counsels of both the tech companies and the API-using enterprises need to be trained on the new laws and get ready for contract negotiations. Systems integrators and outsourcers such as Tech Mahindra, TCS, Infosys, etc. also need to gear up for the challenge. Liability will be passed down from the enterprise to the systems integrators, and they need to ensure compliance is built in, and tested correctly, with proper documentation.

  • Governments & Regulators

Government and regulatory bodies need to upskill their staff on the new laws and on how to verify and test compliance for commercial launch approval. The tech companies are very big and will throw their best technical and legal talent at justifying that their systems are compliant; if regulatory bodies are not skilled enough to verify this, the law will become ineffective and exist only on paper. This is a huge challenge for government bodies.

  • The legal community: public prosecutors, company legal counsels, and defence lawyers

Are they ready for the avalanche of legal cases, starting from regulatory approvals and appeals, through ongoing copyright and privacy violations, to inter-company litigation over liability sharing between tech companies, enterprises, and systems integrators?

Massive upskilling and training are needed, even for senior lawyers, as the issues arising from this law are very different. The law degree curriculum needs to include a course on AI regulations. For example, the essence of a comedian’s talk show is “learnt” by a deep learning model and stored deep in its neural network. Is that a copyright violation? The model outputs a similar-style comedy speech using the “essence” stored in the neural network. Is the output a copyright violation? Who is responsible and accountable for an autonomous car accident? Who is responsible for a factory accident causing injury to a worker in an autonomous robot factory? There are lots of new legal challenges.

Most Indian systems integrators are investing large sums of money to reskill and to create new AI-based service offerings. I hope they are spending part of that investment on AI regulations and compliance; otherwise, they run the risk of losing all the profits in a few tricky legal challenges.

More later

L Ravichandran