Scaling AI: Finding the right Biztech strategies for organizations to succeed

Sujatha Gopal, CTO - Communications, Media & Information Services (CMI), Tata Consultancy Services | Friday, 07 February 2025, 06:27 IST


In an interaction with CIOTechOutlook, Sujatha Gopal, CTO, Communications, Media & Information Services (CMI), Tata Consultancy Services, shared her views on the steps organizations should take to build and nurture the right AI talent and teams to ensure successful scaling, and more. She is a seasoned technologist driving domain and tech strategy, enterprise architecture transformation, strategic roadmapping for tech-enabled business transformation, platform strategies, and new technology incubation. Sujatha has over 25 years of experience across industries including Banking & Financial Services, Manufacturing, Retail, and Communications & Media.

Scaling AI effectively requires aligning it with the core business objectives. What strategies should organizations use to align AI initiatives with business goals to ensure long-term success?

A value chain analysis is the first step towards aligning business goals with AI initiatives. Value chain analysis includes key strategic analysis and decision-making tasks. It is important to clearly define the business objectives, frame interactions and value exchange, and then design the AI system workflows, model interactions, and business transactions. This also calls for larger ecosystem collaboration, which may include partners, customers, suppliers, and more. AI often requires integration with core functional business processes. Enterprises need to move beyond existing metrics to measure the success of AI implementations; this calls for developing the right performance indicators to measure the impact of AI technologies on their business.

It is also important to define a clear and progressive roadmap for scaling AI. Organizations need to outline a business case and enumerate the return on investment, manage risks, secure the required resources, and set up the right architecture, governance, and implementation plan. The path to production for GenAI solutions is not easy, and building an AI-mature enterprise is a progressive journey. While PoC architectures are relatively simple, production-grade architectures require significant building blocks to be enabled and secured. Deploying GenAI solutions at scale also requires profound shifts in people's roles, ways of working, and the compliance and governance around them. With Agentic AI picking up, the orchestration and governance of autonomously acting software entities require advanced tooling and strict guardrails.
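To make the guardrail point concrete, here is a minimal, hypothetical sketch of a policy check that sits between an agent's proposed action and its execution; the action names, spend limit, and helper functions are illustrative placeholders rather than a prescribed design.

```python
# Minimal guardrail sketch for an agentic workflow: every action the agent
# proposes is validated against an allowlist and a spend limit before it is
# executed. Action names and limits are hypothetical.
ALLOWED_ACTIONS = {"lookup_order", "draft_email", "create_ticket", "issue_refund"}
MAX_REFUND = 100.0  # refunds above this threshold require a human reviewer

def guardrail(action: str, params: dict) -> bool:
    """Return True only if the proposed action passes all policy checks."""
    if action not in ALLOWED_ACTIONS:
        return False
    if action == "issue_refund" and params.get("amount", 0) > MAX_REFUND:
        return False
    return True

def execute(action: str, params: dict) -> str:
    if not guardrail(action, params):
        return f"BLOCKED: '{action}' with {params} needs human approval"
    return f"executed '{action}' with {params}"

print(execute("create_ticket", {"severity": "P2"}))
print(execute("issue_refund", {"amount": 500.0}))  # blocked and escalated
```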

Organizations need to be clear on hotspots, which are contextual use case placement scenarios in a business value chain that will yield the maximum benefit from the use of AI technology. This will also provide a view of the degree of impact, including the need to reimagine processes to be AI native, with the human-AI interplay considered at the core. AI native involves embedding reliable AI capabilities seamlessly within a system, its processes, and all stages of the value chain, encompassing design, deployment, operation, and maintenance. Evaluation of hotspots includes identifying the business impact to drive tangible business outcomes, KPI impacts, and a clear ROI path post implementation. Technical complexity, in terms of the need for customizing models, data readiness, implementation, and integration, is also an important facet of evaluation. Establishing governance at both the strategic and operational levels is important. The governance function must also ensure that reliability and safety measures are defined to protect against failures, through well-designed processes for remediation management, continuous monitoring, feedback, and evaluation. Setting up an AI-first North Star architecture will serve as a foundation for AI governance.
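As an illustration of how hotspot evaluation can be operationalized, the sketch below scores candidate use cases against weighted criteria drawn from the dimensions above (business impact, ROI clarity, data readiness, complexity); the candidates, weights, and scores are invented for the example.

```python
# Hypothetical hotspot scoring: rank candidate use cases in the value chain
# against weighted evaluation criteria. Candidates, weights, and 0-10 scores
# are illustrative only.
WEIGHTS = {
    "business_impact": 0.4,
    "roi_clarity": 0.3,
    "data_readiness": 0.2,
    "low_complexity": 0.1,
}

candidates = {
    "Invoice triage":           {"business_impact": 8, "roi_clarity": 7, "data_readiness": 9, "low_complexity": 8},
    "Network fault prediction": {"business_impact": 9, "roi_clarity": 6, "data_readiness": 5, "low_complexity": 4},
    "Marketing copy drafts":    {"business_impact": 5, "roi_clarity": 8, "data_readiness": 7, "low_complexity": 9},
}

def score(criteria: dict) -> float:
    """Weighted sum of the evaluation criteria for one candidate hotspot."""
    return sum(WEIGHTS[name] * criteria[name] for name in WEIGHTS)

# Highest-scoring hotspots are the first candidates for an AI-native redesign.
for name, criteria in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(criteria):.1f}")
```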

Scaling AI involves not only technology but also the right talent and organizational culture that supports continuous learning and adaptation. What steps should organizations take to build and nurture the right AI talent and teams to ensure successful scaling?

Organizations should:

  • Encourage learning and growth through training programs.
    • Experimentation & Learning: A culture that encourages experimentation, even if it leads to failures, is essential. AI development is iterative, and learning from mistakes is key.
    • Continuous Learning: Fostering a culture of continuous learning is vital. Employees should feel empowered to explore new technologies, attend workshops, and pursue certifications to stay ahead of the curve in the rapidly evolving AI landscape.
  • Be data driven.
    • Data Literacy: Cultivate a culture of data literacy across the organization. This ensures that everyone understands the importance of data quality, its role in AI, and how to use data effectively.
  • Cross-Functional Teams: Encourage collaboration among data scientists, engineers, and business stakeholders. Promote success stories across units and foster a mindset of AI adoption.
  • Responsible AI: Foster a culture of responsible AI development, emphasizing fairness, transparency, and accountability. Establish clear ethical guidelines for AI development and use within the organization.
  • Foster a culture of innovation through collaboration and communication, and prioritize ethical AI development.

By focusing on these aspects, organizations can create a truly AI-ready culture that attracts, retains, and develops AI talent.

As AI is integrated into business processes, what are the key challenges businesses face when managing the change associated with scaling AI, and how can they overcome them?

A few key challenges are data silos and spaghetti pipelines, infrastructure limitations, reskilling needs, regulations, and AI ethics concerns.

Data Silos - One of the most common challenges when scaling AI today is that an organization's data sits in silos: different systems store information separately. This fragmentation makes it difficult for AI to access the full set of data needed to generate accurate insights. Integrating multiple data sources and establishing a centralized data lake or data hub can be a way forward.
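As a minimal sketch of that consolidation step, the snippet below pulls records from two hypothetical silos (a CSV export and a SQLite database) into a single parquet file that downstream AI pipelines can treat as one view; the file names, table, and join key are placeholders.

```python
import sqlite3
import pandas as pd

# Hypothetical silos: a CRM export (CSV) and a billing database (SQLite).
# File names, table, and join key are placeholders.
crm = pd.read_csv("crm_export.csv")
with sqlite3.connect("billing.db") as conn:
    billing = pd.read_sql("SELECT customer_id, amount, invoice_date FROM invoices", conn)

# Harmonize the join key, then land one unified view in a parquet-based
# "lake" location so downstream AI pipelines read from a single place.
crm["customer_id"] = crm["customer_id"].astype(str)
billing["customer_id"] = billing["customer_id"].astype(str)

unified = crm.merge(billing, on="customer_id", how="left")
unified.to_parquet("data_lake/customer_360.parquet", index=False)  # directory assumed to exist
```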

Infrastructure - Compute, storage, and network switching capabilities for the desired performance levels, reliability, and other NFRs are core to production scaling and management. Cloud platforms can increase computing power and storage as necessary, making it easier to scale AI applications, but they can also incur significant costs if usage is not monitored. Cost and FinOps are central to scaling and implementing AI. Model usage costs, training and inference costs, and computation and consumption costs are all critical to monitor at granular levels. A clear governance strategy is key to ensuring optimal cost consumption.
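One way to get that granular visibility is to meter every model call as it happens. The sketch below accumulates token counts and estimated spend per model; the model names and per-1K-token prices are illustrative, not actual provider rates.

```python
from collections import defaultdict

# Illustrative per-1K-token prices; real prices vary by provider and model.
PRICE_PER_1K = {"small-model": 0.0005, "large-model": 0.03}

usage = defaultdict(lambda: {"tokens": 0, "cost": 0.0})

def record_call(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Accumulate token counts and estimated spend per model."""
    tokens = prompt_tokens + completion_tokens
    usage[model]["tokens"] += tokens
    usage[model]["cost"] += tokens / 1000 * PRICE_PER_1K[model]

# Hypothetical calls logged over a day.
record_call("large-model", prompt_tokens=1200, completion_tokens=400)
record_call("small-model", prompt_tokens=300, completion_tokens=150)

for model, stats in usage.items():
    print(f"{model}: {stats['tokens']} tokens, ~${stats['cost']:.4f}")
```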

Reskilling - The workforce needs to be reskilled to adapt to new roles that require AI literacy and the ability to work alongside AI systems. This requires significant investment in training and development programs.

Regulation - Regulatory, privacy, security, and inclusiveness standards and compliance are vital for every enterprise, and they become even more relevant when the latest AI technologies are deployed at scale in production.

AI Ethics - AI systems can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes. Hence it is crucial to have quality data during training. Similarly, explainability of the decisions taken by AI can be used to address concerns raised about accountability and fairness.
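A simple, illustrative screen for such bias is to compare outcome rates across a sensitive attribute in the training data, as in the sketch below; the dataset, column names, and 80% threshold are hypothetical, and this is not a substitute for a full fairness audit.

```python
import pandas as pd

# Hypothetical training data for a loan-approval model; column names and
# values are invented for illustration.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
    "approved": [0,    1,   1,   1,   0,   0,   1,   1],
})

# Positive-outcome (approval) rate per group in the sensitive attribute.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Flag groups whose approval rate falls below 80% of the best group's rate,
# a crude disparate-impact style screen to prompt deeper investigation.
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]
if not flagged.empty:
    print("Potential bias against:", list(flagged.index))
```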

Scaling AI requires robust and scalable tech infrastructure to support data processing. How important is infrastructure in scaling AI, and what technologies or frameworks do you recommend organizations focus on to build a scalable AI ecosystem?

A well-architected infrastructure is the foundation upon which AI systems are built, trained, and deployed. Investing in robust infrastructure is crucial for organizations looking to leverage the power of AI to gain a competitive advantage. At the most basic level, training an AI system involves combining the following assets: collecting and managing the data pipeline for training and validation, providing the technical infrastructure to carry out the training and keep track of deployed AI systems, and, finally, hosting and maintaining the set of AI models used in the systems.

Computational power is paramount for training and deploying AI models. AI models, especially deep learning models, require immense computational power for training. This often involves processing massive datasets, which necessitates powerful GPUs, TPUs, and high-performance computing clusters. Even after training, deploying AI models for real-time applications involving near-real-time data streaming and analytics demands significant processing power to handle incoming data and generate predictions quickly. Robust data storage and management are essential for handling large datasets. Data needs to be readily accessible to AI algorithms for training and inference, which requires robust data pipelines and efficient data retrieval mechanisms. Scalability and flexibility are crucial for adapting to evolving AI needs, and high-speed, reliable networks are crucial for fast data transfer and communication between the different components of the AI infrastructure.
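As a small, assumption-laden illustration of sizing compute before training, the sketch below checks whether a GPU is visible via PyTorch and does a back-of-the-envelope estimate of the memory needed just to hold the weights of a hypothetical 7B-parameter model in fp16.

```python
import torch

# Capacity check before committing a training job to local hardware:
# confirm an accelerator is visible and report its memory budget.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")
else:
    print("No GPU detected; fall back to CPU or a managed cloud cluster.")

# Rough, illustrative estimate: fp16 weights alone for a hypothetical
# 7B-parameter model (2 bytes per parameter, before activations or optimizer state).
params = 7e9
print(f"~{params * 2 / 1e9:.0f} GB just to hold the fp16 weights")
```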

Looking ahead, what emerging AI trends do you see as most critical for organizations to watch when scaling AI for future business success?

Several trends stand out:

  • Agentic AI and a curated move towards autonomous systems is one of the major emerging trends.
  • Generative AI moving beyond text, with multimodal usage creating realistic synthetic content that powers new forms of marketing, entertainment, and even product design.
  • Code generation: while developers are already adopting this, feedback agents are being conceptualized to review code autonomously, removing the human in the loop and building a larger knowledge fabric.
  • Agentic AI-powered automation: automating more complex tasks and entire business processes, with agents handling sophisticated work such as decision-making and dynamic workflow orchestration.
  • AI for cybersecurity is another rapidly evolving space, with proactive threat detection by AI algorithms that analyze vast amounts of data to identify and predict cyber threats such as phishing attacks, malware, and ransomware in real time.
  • Edge AI: bringing AI to the edge by deploying AI models or SLMs on edge devices (like smartphones and IoT devices) to enable real-time processing and reduce latency (see the sketch after this list).
  • Last but not least, AI for sustainability is a fast-emerging space: optimizing energy consumption in buildings, factories, and transportation systems; predictive maintenance that anticipates equipment failures to minimize downtime and reduce waste; and developing sustainable solutions, where AI can play a crucial role in addressing climate change, such as optimizing renewable energy sources and developing new materials.
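As a sketch of the edge AI point above, the snippet below uses PyTorch dynamic quantization to shrink a toy model to INT8 before packaging it for a device; the model architecture, layer sizes, and file names are placeholders.

```python
import os
import torch
import torch.nn as nn

# Hypothetical small model standing in for an SLM or on-device classifier.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Dynamic INT8 quantization shrinks Linear-layer weights, reducing model
# size and CPU inference latency when targeting edge devices.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Compare serialized sizes of the float32 and int8 variants.
torch.save(model.state_dict(), "model_fp32.pt")
torch.save(quantized.state_dict(), "model_int8.pt")
print(os.path.getsize("model_fp32.pt"), "->", os.path.getsize("model_int8.pt"), "bytes")

# The inference path is unchanged: the quantized model accepts the same inputs.
print(quantized(torch.randn(1, 128)).shape)
```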