Sivakumar Selva Ganapathy, VP - Software Engineering, Johnson Controls
In an exclusive interview with CIOTechOutlook, Sivakumar Selva Ganapathy shared his insights on leading large-scale digital engineering teams in the fast-growing age of AI. He also talked about strategies to attract and upskill talent in AI to support large-scale projects. Sivakumar oversees engineering and innovation across multiple campuses in India, driving innovation through leadership and a deep understanding of customer needs.
The rapid growth of AI has created a talent gap. What strategies can be implemented to bridge the skill gap, particularly in AI-driven areas like machine learning, data science, and AI engineering?
As AI is evolving rapidly, we need to constantly introspect, update our curriculum, and provide new learning plans for all individuals. There are two aspects to bridging the skill gap: the measures that can be taken inside the enterprise and those that can be implemented outside it. Inside the enterprise, a safe environment needs to be established where people are encouraged to experiment and are provided with the right set of tools. They must also be educated that being skilled in AI is no longer an option but a business imperative, and be given online AI learning curricula along with access to subject matter expertise from within and outside the company. Moreover, training alone will not be sufficient; a fail-fast approach is needed in which real problem statements are given along with tools and licenses, allowing people to experiment with various machine learning models and refine their approach.
For the Indian ecosystem to bridge the AI talent gap, we need to collaborate with universities and train young minds so that they become industry-ready. Another important point is that the trifecta of industry, academia, and government must tailor the syllabus and provide additional learning opportunities within the national skill framework. For example, the NASSCOM Foundation is working to educate the workforce on embedded AI and provide learning opportunities. It is important to recognize and reward Generation Z, who will be trained on AI, as well as the subject matter experts who are willing to train themselves in AI.
Digital engineering teams often consist of cross-disciplinary experts. What tools and processes can help integrate the work of multidisciplinary experts and foster synergy across diverse teams?
In a large enterprise, where multiple teams such as Digital, IT, Functional, and Business collaborate, maintaining a diverse cross-functional perspective is important for achieving the desired results. To ensure effectiveness, the first step is establishing a common goal or OKRs (Objectives and Key Results). The major difference between a goal and an OKR is that an OKR is flexible: in an agile environment, some targets may be achieved and some may not, so they can be adjusted as you move forward. Different methodologies can also be adopted, such as Agile Scrum or SAFe, and if the work is completely task-based, Kanban. Another important aspect is providing tools like Jira, Confluence, GitHub, and Microsoft Copilot across all functional teams. This not only makes teams productive but also helps them collaborate.
There is a significant talent shortage in AI and digital engineering fields. What strategies can be implemented to attract, retain, and upskill talent in AI to support large-scale projects?
Regardless of the size of the company, be it a startup, a small firm, a large Global Capability Center (GCC), or an IT service provider, every enterprise should start AI adoption from the top, and a go-forward strategy should be prioritized. Being AI-enabled and AI-first is being talked about at the enterprise level. Once the talent commits to investing time in AI, the reward and compensation team can evaluate the specific skills and abilities of employees within the company and compare them to what is available in the broader market. A dedicated budget allocation is important for AI initiatives to attract talent from the market, which means our competitive benchmarking has to be different from the norm. Leaders, as hiring managers and decision makers, are empowered to invest in AI and attract new talent.
Another major aspect after attracting talent is keeping them motivated. A recognition and reward mechanism is needed that rewards even small failures, because the technology is evolving rapidly, be it Llama models, ChatGPT, large language models, small language models, or tiny language models. A work ambience where people can fail fast and stay motivated will attract and retain talent. Other recognition, whether a simple pat on the back or a financial reward, can definitely help retain talent.
Scaling AI across large teams and projects while ensuring quality, security, and efficiency is complex. What are the best practices for managing large AI-driven projects with multiple stakeholders and complex timelines?
Before adopting large AI projects, some foundational best practices need to be addressed. Every company should have AI ethics principles on which it develops its solutions. The infrastructure on which the models will run needs to be secure. Privacy of the data and of people has to be built in by design. Managing hallucination and misinformation in the data is most important, as is watching for security breaches and having mechanisms in the data and analytics platform to ensure continuous monitoring of data. Once these fundamentals are fixed, we can think bigger and adopt large AI projects.
As stakeholders require quick returns, even if the AI projects are large scale, we need to think bite-sized, iterate in smaller steps to get the value, and then incrementally expand the scope. Another important aspect is value articulation: training an AI model, getting inferences from it, and running it in operate-and-maintain mode require a lot of capital. The CFO will only agree to invest in an AI project if the benefits are realistic. Therefore, in a large project, it is important to break the work into modular, smaller chunks, do a proof of concept within a secure environment, articulate the value, and then incrementally expand it.
Many organizations struggle to integrate AI into existing legacy systems. How can digital engineering leaders facilitate seamless integration of AI into legacy infrastructure without disrupting ongoing operations?
As the world evolves, enterprises are increasingly embracing digital technology, and we must acknowledge that legacy systems will exist alongside new contemporary systems. The challenge is to make them interact and cooperate. With the legacy system in place and the new AI system being implemented, significant technology debt accumulates and needs to be addressed. Technology debt is a wasted cost that must be minimized, which requires containerizing the legacy system so that it can interface with other contemporary systems. The other point is to deploy a contemporary microservices-based architecture so that new systems can interface with the legacy system through data-driven APIs. The entire architecture needs to be modular, as one huge monolithic architecture cannot interface seamlessly. Therefore, the complexity needs to be broken down into modular microservices on an API-driven architecture. The legacy system can then continue to operate, secured behind APIs.
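As a rough illustration of the API facade described above, the sketch below (in Python with FastAPI; the endpoint, field names, and in-memory legacy lookup are hypothetical placeholders, not a description of any specific implementation) shows how a containerizable microservice could expose legacy data through a data-driven API so that contemporary systems never touch the legacy internals directly.

```python
# Hypothetical sketch: a thin API facade in front of a legacy system.
# The endpoint, field names, and legacy lookup are illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Legacy facade (illustrative)")

class CustomerRecord(BaseModel):
    customer_id: str
    name: str
    balance: float

# Stand-in for a call into the legacy system (e.g. a mainframe query);
# in practice this would go through whatever adapter the legacy system offers.
_LEGACY_STORE = {
    "C001": {"customer_id": "C001", "name": "Asha", "balance": 1250.0},
}

@app.get("/customers/{customer_id}", response_model=CustomerRecord)
def get_customer(customer_id: str) -> CustomerRecord:
    """Expose legacy data through a modern, data-driven API."""
    record = _LEGACY_STORE.get(customer_id)
    if record is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return CustomerRecord(**record)
```

Because the facade is a small, independently deployable service, it can be containerized and secured on its own while the legacy system keeps running unchanged behind it.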
Once the data is extracted from the legacy system, it must be securely stored in a protected environment before being used for training and inference in the AI model. After these inferences are produced, value can be generated depending on the function in which they are deployed, be it customer service, finance, HR, or sales operations, to improve margins or top-line revenue. We have to acknowledge and embrace the fact that the contemporary system and the legacy system have to commingle. Legacy systems are not inferior simply because they lack the latest architecture; for storing data, they are very effective. For example, BFSI mainframes are still in use, but if the same data needs to be accessed from a mobile application, an API-based architecture is needed. Therefore, the legacy system must first be acknowledged, embraced, and adapted, and then action can be taken so that it can interface seamlessly.
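Extending the same illustration, the hypothetical sketch below shows the extract-and-stage step described above: data is pulled through the facade, personal fields are masked for privacy by design, and the result is written to a protected staging area before any training or inference uses it. The URL, directory, and masking rule are assumptions for illustration only.

```python
# Hypothetical sketch: pull data from the legacy facade and stage it in a
# protected location before any model training or inference uses it.
# The URL, staging directory, and masking rule are illustrative assumptions.
import json
import os
import requests

FACADE_URL = "http://localhost:8000/customers/C001"  # facade from the previous sketch
STAGING_DIR = "/secure/ai-staging"                   # restricted, access-controlled area

def extract_and_stage() -> str:
    response = requests.get(FACADE_URL, timeout=10)
    response.raise_for_status()
    record = response.json()

    # Privacy by design: drop or mask personal fields before staging.
    record.pop("name", None)

    os.makedirs(STAGING_DIR, exist_ok=True)
    path = os.path.join(STAGING_DIR, f"{record['customer_id']}.json")
    with open(path, "w") as fh:
        json.dump(record, fh)
    os.chmod(path, 0o600)  # owner-only access as a minimal protection step
    return path

if __name__ == "__main__":
    print("staged:", extract_and_stage())
```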