Agentic AI in APAC: Navigating the Path from Pilot to Production

This blog post is a follow-up to what I shared at our recent meetup organized by AI Verify, where over 100 members of our community joined us at IMDA to share and learn from real-world stories about making Agentic AI reliable.

The Asia-Pacific region is witnessing a significant shift in how organizations approach artificial intelligence, moving beyond traditional AI implementations toward more autonomous, agentic systems. Based on our recent pilot programs with enterprise customers across APAC, we’re seeing distinct patterns emerge in adoption strategies, use cases, and implementation challenges.

Current Adoption Patterns: What Our Pilot Data Reveals

Our customer pilot programs have provided valuable insights into how organizations are actually utilizing agentic AI capabilities. The data reveals interesting trends in feature adoption:

Tool Usage stands at 45% adoption, indicating that organizations are leveraging AI agents’ ability to interact with existing software tools and APIs. This suggests a pragmatic approach where companies are extending their current technology stack rather than replacing it entirely.
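
Below is a minimal, library-free sketch of what this tool-usage pattern can look like in practice. The tool names, the keyword-based routing, and the stub responses are all invented for illustration; a production agent would typically let an LLM choose the tool and pass structured arguments.

```python
# Minimal sketch of the "tool usage" pattern: an agent extends existing
# systems by calling registered tools (APIs) instead of replacing them.
# Tool names and the naive routing logic below are illustrative only.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a plain Python function as a callable tool."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("crm_lookup")
def crm_lookup(query: str) -> str:
    return f"[stub] CRM record matching '{query}'"

@tool("ticket_search")
def ticket_search(query: str) -> str:
    return f"[stub] open tickets mentioning '{query}'"

def run_agent(task: str) -> str:
    # A real agent would let an LLM pick the tool; here we route on a keyword.
    name = "crm_lookup" if "customer" in task.lower() else "ticket_search"
    return TOOLS[name](task)

if __name__ == "__main__":
    print(run_agent("Summarise customer Acme Pte Ltd"))
```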

Multi-Agent Systems lead the way at a notable 70% adoption rate, demonstrating strong interest in deploying multiple specialized agents that can collaborate on complex tasks. This high adoption rate indicates that APAC organizations recognize the value of distributed AI capabilities.

Reflection capabilities show 15% adoption, suggesting that while organizations value AI systems that can self-evaluate and improve their responses, this remains a more advanced feature that requires additional organizational maturity.

Action-oriented implementations currently represent 5% of adoption, indicating that while there’s interest in AI systems that can take direct actions, most organizations are still in the monitoring and recommendation phase. This low adoption rate reflects the preference for human-in-the-loop approaches, where AI agents recommend actions but require human approval before execution, ensuring oversight and control over critical business decisions.
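
Here is a hedged sketch of that human-in-the-loop pattern: the agent may only recommend an action, and a person must approve it before anything executes. The action name, payload, and stdin-based approval prompt are purely illustrative stand-ins for whatever approval workflow an organization actually uses.

```python
# Illustrative human-in-the-loop gate: the agent proposes, a human approves.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    payload: dict
    rationale: str

def execute(action: ProposedAction) -> None:
    print(f"Executing {action.name} with {action.payload}")

def human_approves(action: ProposedAction) -> bool:
    answer = input(f"Approve '{action.name}'? ({action.rationale}) [y/N] ")
    return answer.strip().lower() == "y"

def act_with_oversight(action: ProposedAction) -> None:
    if human_approves(action):
        execute(action)
    else:
        print(f"Logged recommendation only: {action.name}")

if __name__ == "__main__":
    act_with_oversight(ProposedAction(
        name="issue_refund",
        payload={"order_id": "SO-1042", "amount": 120.0},
        rationale="Customer reported duplicate charge",
    ))
```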

Top 5 Use Cases Driving APAC Adoption

1. Software Development Lifecycle (SDLC)

Organizations are implementing agentic AI to automate code review processes, generate test cases, and assist in deployment pipelines. The ability of AI agents to understand context across multiple development phases makes them particularly valuable for streamlining software delivery.

2. Deep Research and Analysis

Companies are deploying AI agents to conduct comprehensive market research, competitive analysis, and regulatory compliance reviews. These agents can process vast amounts of unstructured data and synthesize findings across multiple sources and languages—particularly valuable in APAC’s diverse regulatory landscape. For example, financial institutions are using AI agents to research source of wealth documentation and process commercial loan company profiles, automatically gathering and analyzing corporate filings, news articles, and regulatory records to build comprehensive risk assessments.

3. Manufacturing Process Automation

Manufacturing companies are using agentic AI to optimize production schedules, predict maintenance needs, and coordinate supply chain activities. AI agents can adapt to changing production requirements and coordinate across multiple systems in real-time. A notable application is in new product design research, where AI agents analyze market trends, competitor products, regulatory requirements, and technical specifications to provide comprehensive insights that inform product development decisions and accelerate time-to-market.

4. Sales Insights and Customer Experience

Organizations are implementing AI agents to analyze customer interactions, predict purchase behavior, and personalize engagement strategies. These systems can process customer data across multiple touchpoints and provide actionable insights for sales teams.

5. Procurement Process Automation

Companies are streamlining procurement workflows using AI agents that can evaluate suppliers, negotiate contracts, and manage purchase orders. These agents can adapt to changing market conditions and organizational requirements while maintaining compliance standards.

Three Distinct Adopter Profiles

Our experience across APAC markets has revealed three primary adoption patterns:

Early Adopters: The “Agentic” Pioneers

These organizations are enthusiastic about becoming “agentic” and focus on automating existing workflows. They’re willing to experiment with newer technologies and often serve as proof-of-concept environments for more advanced AI capabilities. Early adopters typically have strong technical teams and leadership buy-in for AI initiatives.

Stack Builders: Long-term Strategic Planners

Stack Builders approach agentic AI with enterprise-wide adoption in mind. They start with simple, well-defined use cases while building the infrastructure and organizational capabilities needed for broader deployment. These organizations prioritize scalability and integration with existing enterprise systems.

Pragmatic Adopters: Embedded Solution Seekers

Pragmatic adopters prefer implementing agentic AI through embedded applications in platforms they already use, such as Salesforce or Microsoft 365. They focus on immediate business value and prefer solutions that require minimal change to existing processes and user behavior.

Key Implementation Challenges

Despite growing interest, organizations face several significant hurdles in scaling agentic AI implementations:

Business Readiness for Dynamic Workflows

Traditional business processes are designed for predictability and control. Agentic AI introduces dynamic decision-making that can feel unpredictable to stakeholders. Organizations struggle with the cultural shift required to trust AI agents with important business decisions, particularly in risk-averse cultures common across many APAC markets.

Quantification of Business Outcomes

Measuring the ROI of agentic AI implementations remains challenging. Unlike traditional automation projects with clear metrics, agentic systems often provide value through improved decision quality, faster response times, and enhanced adaptability—benefits that are difficult to quantify using conventional business metrics.

Access to Source Systems

Many organizations have data and systems scattered across multiple platforms, often with limited API access or integration capabilities. Agentic AI requires comprehensive data access to function effectively, but legacy systems and data silos create significant technical barriers to implementation.

Cost of Manual Data for Evaluation

Evaluating agentic AI performance requires significant manual effort to create test datasets, validate outputs, and assess decision quality. Organizations underestimate the ongoing cost of maintaining evaluation frameworks, particularly when AI agents are deployed across multiple use cases with different success criteria.

Looking Forward: The Path to Maturity

The APAC market’s approach to agentic AI reflects broader regional characteristics: thoughtful adoption, emphasis on practical business outcomes, and careful risk management. Organizations that succeed in scaling agentic AI implementations will likely be those that address the fundamental challenges of trust, measurement, and integration while building organizational capabilities for managing dynamic AI systems.

Two critical factors will determine scalability success: agent observability and cost optimization. Agent observability—the ability to monitor, debug, and understand AI agent decision-making processes in real-time—is essential for building organizational trust and ensuring reliable performance at scale. Without clear visibility into how agents make decisions, organizations struggle to troubleshoot issues, optimize performance, and maintain compliance standards.
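
As a rough illustration of what agent observability can mean at the code level, the sketch below wraps each agent step so that inputs, output previews, and latency are emitted as structured logs. The step functions and log fields are hypothetical; a real deployment would ship these records to a tracing or logging backend rather than print them.

```python
# Minimal observability sketch: trace every agent step as structured JSON.
import functools
import json
import time

def traced(step_name: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            record = {
                "step": step_name,
                "inputs": [str(a) for a in args],
                "output_preview": str(result)[:80],
                "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            }
            print(json.dumps(record))  # in practice, ship to your log pipeline
            return result
        return inner
    return wrap

@traced("retrieve_context")
def retrieve_context(query: str) -> str:
    return f"context for {query}"

@traced("draft_answer")
def draft_answer(context: str) -> str:
    return f"answer based on: {context}"

if __name__ == "__main__":
    draft_answer(retrieve_context("loan eligibility rules in SG"))
```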

Equally important is managing the cost of the solution, which becomes a key barrier to scale. While pilot programs may absorb higher per-token costs, enterprise-wide deployment requires sustainable economic models. Organizations need to factor in not just the direct costs of AI infrastructure, but also the ongoing expenses of monitoring, evaluation, human oversight, and system integration.

As the technology matures and more organizations share their implementation experiences, we expect to see standardized evaluation frameworks, improved integration capabilities, and greater organizational comfort with AI-driven decision-making. The current pilot phase is laying the groundwork for more widespread adoption across the region.

Data, Talent, and Trust are vital foundations for a data-driven enterprise.

Three key takeaways from my presentation at the Gartner Data & Analytics Summit 2022, Mumbai, and from the Gartner keynotes.

I have engaged with hundreds of enterprises and have seen some amazing transformations, outcomes & innovations. Yet, while technology has been evolving rapidly, the foundational challenges remain the same. If there were just three focus areas that could guarantee success, these would be:

1. Data – We have seen the need shift from batch to real-time, the complexity of integrating data sources almost triple, and adapting to change in source systems become a de facto design requirement. We have also seen monumental failures with the single-repository data lake approach. The Data Fabric architecture provides a practical alternative that doesn’t require a significant transformation or large investment and focuses on the core of all integration challenges, i.e., metadata consolidation. When machine learning is applied to this consolidated metadata, it provides a magical view of data relationships, usage patterns, data quality, and profiles. Data fabric accelerates data consumption, provides data governance and protection mechanisms that adapt to change in the ecosystem, and facilitates data sharing. (A toy sketch of this metadata-consolidation idea follows this list.)

2. Talent – How fast you and your team can “unlearn” is the most critical aspect of learning in recent times. Community-based learning in the enterprise is vital to keep pace with the changes and build the skills that help you leverage data. Tools like AutoAI are a great starting point for learning ML/AI for someone new to the field.

3. Trust – Trust is built when we put the user in the pilot’s seat and provide a cockpit that offers access to all the relevant information. Trustworthy AI is an initiative toward ensuring we don’t end up in machine-human conflict and that the model makes better decisions by removing some of the inherent bias.
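
Returning to the Data point above, here is a toy sketch of the metadata-consolidation idea at the heart of a data fabric: column-level metadata from several sources is pulled into one catalog and mined for likely relationships. The source names and the simple name-matching heuristic are assumptions; a real fabric applies machine learning over far richer metadata.

```python
# Toy metadata catalog: consolidate column metadata, then infer likely joins.
from collections import defaultdict

catalog = [
    {"source": "crm_db",     "table": "customers",  "column": "customer_id"},
    {"source": "billing_db", "table": "invoices",   "column": "customer_id"},
    {"source": "s3_landing", "table": "web_events", "column": "customer_id"},
    {"source": "billing_db", "table": "invoices",   "column": "invoice_date"},
]

def candidate_joins(entries):
    """Group columns sharing a name across sources: likely join keys."""
    by_column = defaultdict(list)
    for e in entries:
        by_column[e["column"]].append(f"{e['source']}.{e['table']}")
    return {col: tables for col, tables in by_column.items() if len(tables) > 1}

if __name__ == "__main__":
    for column, tables in candidate_joins(catalog).items():
        print(f"{column}: possible join across {tables}")
```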

In terms of the Gartner keynotes themselves, the three key takeaways were:

  1. Gartner claimed that by 2030, synthetic data would completely overshadow real data in AI models. 
  2. A de-emphasis on big data, finally acknowledging that small data can contribute equally to success if appropriately harnessed.
  3. Governance was emphasised as a way of working rather than control; however, personally, I was a bit disappointed it came late in the framework.

5 key reasons to build Data Fabric with IBM on AWS to drive data and analytics initiatives.

1. Focus on empowering users with data.

Even after substantial investments, enterprises struggle with data sharing and serving users with the correct data. The primary reason for adopting Data Fabric is data sharing and self-service usage. The focus is on the simplicity of data consumption rather than storage and retention.

2. Expand the computing infrastructure without creating additional silos.

Users leverage insights based on trust in the data source, and data lineage plays a vital role in building that trust. However, it is challenging to trace lineage when data sources are distributed across multiple systems, geographies, data formats, and processing tools. The IBM data fabric platform provides integration adapters and connectors to support data traceability across traditional sources, new-generation cloud services, and open-source tools.

3. Cloud Agnostic data fabric platform avoids lock-in.

The IBM data fabric solution is based on the IBM Cloud Pak for Data platform, which itself can be deployed on-premises or on any other cloud vendor, providing a unified data plane and a common user experience. In addition, data sources or services can be provisioned across AWS and on-premises without the need for additional training.

4. Avoid Cloud Bill Shock for Data-Intensive Services.

AWS provides SageMaker to kick-start your data science and ML initiatives with minimal investment. Charges are based on actual compute consumption, an excellent model for experimentation. As ML becomes mainstream, cost is best managed by combining PaaS and SaaS so your data scientists have the freedom to experiment without worrying about bill shock. The same is true for intensive data processing, where an on-premises deployment can provide the lowest TCO. The Data Fabric platform binds the data pipeline irrespective of the location of the service.

5. Adopt hybrid and multi-cloud risk-free with policy-based security.

Regulated organisations and the public sector have been accumulating petabytes of data with limited access for analytics. Security and privacy concerns limit data sharing. The traditional approach has been to regulate access control at the source. While this is a prudent approach, it inhibits collaboration and data sharing and is tedious to manage. Centralised policy-based enforcement and data usage monitoring provided by the IBM Data Fabric platform can ensure data protection across distributed sources.
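
The sketch below illustrates the centralised, policy-based enforcement idea in the simplest possible terms: access rules live in one place and are evaluated at query time across any source. The roles, sources, and masking rules are invented for the example and are not how the IBM platform is actually configured.

```python
# Toy centralised policy engine: one rule set, applied to any source at read time.
POLICIES = [
    {"role": "analyst",   "source": "*",         "columns_masked": ["nric", "salary"]},
    {"role": "regulator", "source": "core_bank", "columns_masked": []},
]

def apply_policy(role: str, source: str, row: dict) -> dict:
    for p in POLICIES:
        if p["role"] == role and p["source"] in ("*", source):
            return {k: ("***" if k in p["columns_masked"] else v) for k, v in row.items()}
    raise PermissionError(f"No policy allows role '{role}' on source '{source}'")

if __name__ == "__main__":
    record = {"name": "Tan Ah Kow", "nric": "S1234567D", "salary": 85000}
    print(apply_policy("analyst", "core_bank", record))   # masked view
    print(apply_policy("regulator", "core_bank", record)) # full view
```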

Enterprises must have a data fabric layer in place before they go mainstream on cloud adoption, to avoid a further fragmented and siloed ecosystem. Without a data fabric platform at the early stage of the hybrid cloud journey, enterprises may end up with very costly initiatives later, trying to bridge and manage the silos.

What I learned about continuous learning from my 100 days running streak.

Being a technology enthusiast, you may share my frustration of enrolling in dozens of courses on MOOC platforms, each partially completed with no clear end date given the day-to-day job, family, and other commitments. Add to this mandatory corporate education, and you may find yourself in a challenging position due to competing priorities. The result is usually that learning takes low priority or becomes too monotonous.

I challenged myself to run a minimum of 5KM every day for 100 consecutive days. Initially, it seemed monotonous, but I gradually started loving it and completed it successfully, including ten days of running in my drawing room during my stay-home order. I learned a few practical ways to keep going, break the monotony, and have fun. I am now leveraging those routines in my learning, with great results, and am sharing them in the hope you may find them helpful.

  1. Fix your learning time before the day’s schedule takes over – While we all love flexibility, the key to a successful streak is a fixed schedule. I did 80% of my runs between 7 and 8 AM, which helped me keep going without worrying about the daily schedule. So now I have slotted in 45 minutes of reading early in the morning, before my run. It doesn’t mean I am waking up earlier; I have just replaced the time I used to spend catching up with news & Twitter 🙂 The schedule itself is a personal preference, but blocking your calendar is essential.

2. Mix it up – Easy runs, intervals, tempo, fartlek, speed runs, progressive runs: each run day is very different. The same goes for learning. Combine theory with some hands-on exercises, read some blogs on the same topic, or do some quizzes; make sure each session is fun.

3. Find a learning buddy – Well, you can’t read together unless you intend to join the “youth club” at Starbucks. But knowing someone interested in similar subjects allows you to share, discuss the nuances, and learn better.

4. Mentor someone – I started helping a few of my friends who recently discovered a passion for running. Mentoring them helped me improve my own running form and build endurance. In the same way, I have found that mentoring someone on a topic helps me comprehend the subject better.

Explain to end users how AI recommendations are being made.

Explaining the “how and why” behind any product or activity has always been crucial. In the digital era, it has only become more prominent.
For example, Facebook’s inability to explain the “how and why” of its data sharing led to a big trust deficit with its users. The “explainability” revolution started a while ago, as evidenced by the huge popularity of Jupyter/Zeppelin notebooks, data lineage in reporting, data governance projects in the enterprise, and roles such as chief data officer.
The revolution is now picking up pace as the adoption of machine learning and AI goes mainstream. With open-source ML libraries and tons of code available online, both an amateur and a professional can create a model that can be as critical as predicting your illness. How do we differentiate and trust these models and their results?
Consider, for example, the healthcare recommendation engine on https://www.healthcare.com/.

[Screenshot: healthcare.com plan recommendations for the sample inputs]

By providing some basic inputs such as age and location, you get healthcare plans recommended and personalized for you. As there is no explanation, the top three recommendations coming from the same provider raise doubts and questions. Is the recommendation engine, or the company, biased toward a specific provider? What were the criteria for the recommendation?
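
To make the point concrete, here is a toy sketch of what an explanation could look like alongside such a recommendation: each input’s contribution to the score is shown, not just the ranked plans. The plans, weights, and scoring formula are invented for the sketch and are not how healthcare.com actually ranks plans.

```python
# Toy "explainable recommendation": show per-feature contributions, not just a ranking.
user = {"age": 34, "smoker": 0, "dependents": 2}

plans = {
    "Provider A Silver": {"age": -0.5, "smoker": -20.0, "dependents": 6.0, "base": 70.0},
    "Provider B Gold":   {"age": -0.8, "smoker": -10.0, "dependents": 9.0, "base": 75.0},
}

def explain(plan_weights, profile):
    contributions = {k: plan_weights[k] * v for k, v in profile.items()}
    score = plan_weights["base"] + sum(contributions.values())
    return score, contributions

for plan, weights in plans.items():
    score, contributions = explain(weights, user)
    print(f"{plan}: score {score:.1f}")
    for feature, value in contributions.items():
        print(f"  {feature}: {value:+.1f}")
```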

A black-box approach to AI is insensitive to the consumer, creates a lack of trust, and defeats the very purpose of leveraging AI to accelerate and improve the customer experience.

“Explainability” is the next big thing.
Visit https://www.ibm.com/cloud/ai-openscale to experience what it takes to provide explainability for your recommendations.

Machine Learning Remote Deployment.

With GDPR acting as a catalyst, every government is addressing the export of personal data outside the country and the intent and mechanism of data collection. This has suddenly changed the data architecture deployment landscape, specifically for large enterprises that were trying to consolidate data (data lake initiatives) and create a data science Center of Excellence to gain a strategic advantage in their journey toward a “data-driven organization”. The reality for most multinational organizations will be a hub-and-spoke data architecture. With data scientist skills already in high demand, replicating the workforce at each site is not feasible. Further, with multi-site data collection, standardizing on common processes and tools is operationally challenging. The problem is further magnified with multiple tools and multi-cloud deployments.

Let’s consider the example of one of the leading retail and commercial banks in the region. I was recently engaged with them to provide consulting on a digital transformation project. They have centralised operations across multiple countries in the Asia Pacific, a great data science practice, and a third-generation data lake deployment. Suddenly, regulations have forced them to stop collecting data from local partners and remote subsidiary locations. They have models that run on multiple frameworks and libraries such as SparkML, H2O, scikit-learn, TensorFlow, Anaconda, and SPSS. With these models embedded in day-to-day processes, there are challenges ahead in replicating and managing these operations remotely while leveraging the scale they have built. So now a huge set of data is available at multiple sites, owned by entities they may or may not control and can’t fully trust due to shared operations or competitive reasons.

IBM Remote Machine Learning Deployment provides a solution to the above situation. The technology is embedded in our data science platforms such as IBM Cloud Private for Data and Watson Studio. By providing multiple containers, it offers a single deployment instance with all the top ML frameworks and popular open-source libraries to build models in a collaborative environment. When it comes to deployment, by pushing a virtualized environment (container + libraries) to a remote machine (both offline and online), it provides an optimized process and the capability to score close to the data without copying or moving it. The vision is to provide instrumentation and mechanisms to ship the remote metrics to a central repository so that the enterprise can monitor models centrally no matter where they are deployed. Here is a video demo that explains what we have in beta.
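
As a rough sketch of the “score where the data lives” idea (not the IBM product’s actual API), the code below scores local records inside the remote container and ships only aggregate metrics back to a central repository. The endpoint URL, payload schema, and placeholder scoring function are all assumptions.

```python
# Generic sketch: score locally, ship only aggregate metrics to a central repo.
import json
import statistics
import urllib.request

CENTRAL_METRICS_URL = "https://central.example.com/model-metrics"  # hypothetical endpoint

def score_batch(records):
    # Placeholder scoring; in practice, load the shipped model artefact here.
    return [0.1 * len(r.get("description", "")) for r in records]

def ship_metrics(model_id: str, scores) -> None:
    payload = {
        "model_id": model_id,
        "n_scored": len(scores),
        "mean_score": statistics.mean(scores),
        "max_score": max(scores),
    }
    req = urllib.request.Request(
        CENTRAL_METRICS_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        print("Central repo unreachable; buffering locally:", payload)

if __name__ == "__main__":
    local_records = [{"description": "wire transfer to new beneficiary"}]
    ship_metrics("fraud-scorer-v3", score_batch(local_records))
```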

While this approach may seem like an extension of edge analytics, in this case the edge is not just IoT devices; it can be any machine at a subsidiary in another location. It is not just about real-time data or large distributed datasets; the goal is to monetize your ML models and data science capabilities.

Building the Modern Information Fabric

By Information Fabric, I am referring to systems that deliver comprehensive capabilities to enable dynamic, real-time data services.

Based on technology trends, personal experience, and demands from our customers, I consider the four components below essential for building a modern information fabric.

1. Real-time Insight built into systems of engagement (hybrid transactional and analytical systems).
2. Serverless Applications – Microservices and serverless architecture will become an essential part of the information fabric, promoting the quick creation of scalable data services and the modification of action sequences to meet the evolving demands of ever-changing business requirements.
3. Scale by Design, Run Anywhere – Businesses today can acquire petabytes of data in a short span and are dealing with the dilemma of security vs. the elasticity of the cloud. While the industry responds, the need of the hour is to build data services that scale by design and run anywhere.
4. Predictive Analytics, Machine Learning, and AI – While the above three components help meet the “must-haves” of data services, this one is the differentiating factor. Machine learning and AI capabilities will differentiate two businesses competing in the same space.

In this series, I will share my experience building each one of them, starting with how to build an event-driven enterprise that leverages HTAP for real-time data services.

Part 1 – Real-time insights using hybrid transactional/analytical processing (HTAP) systems.

The need for real-time insight on both structured and unstructured data, and for instant customer gratification, is a no-brainer these days and has become essential to any business. If you need a quick reference, please see:

Top industry use cases for real-time analytics

Over the last decade, the approach enterprises use for real-time insight has evolved. Depending on maturity, the execution differs. Most enterprises kicked off their journey by creating an operational data store (transaction replication) and building a data warehouse.

[Figure: traditional data services architecture]

While this approach helped provide access to events (transactions), any actionable insight still came with a substantial time lag due to the ETL/ELT processes, and that doesn’t deliver business impact with the “now” generation of customers. Some IT vendors tried to bridge the gap with tools like change data capture, in-memory analytics, etc. Technology-savvy enterprises embarked on a journey of optimising this with real-time streaming analytics, later converging on a lambda architecture for real-time and batch processing.

[Figure: modern data services architecture]

While some enterprises have managed to improve or influence customer experience through the above mechanisms, project success and ROI have been challenging, mainly due to the complexity involved and the skill sets required. Most of the enterprises that succeed in this endeavour are the ones who take an application-centric or use-case-based approach.

From an enterprise perspective, the challenge with the above approach is that these are still separate applications; apart from the IT challenges (plumbing) and system complexity, it doesn’t provide a mechanism for comprehensive enterprise data services.

Introducing the new IBM Project EventStore –

IBM EventStore is a platform for next-generation transactional and hybrid transactional/analytical processing (HTAP) systems. By “next generation”, I am referring to modern, new operational systems that are built for tomorrow’s systems of engagement, as opposed to traditional systems of record.
The primary goal of EventStore is to provide a platform where real-time analytics and transaction-processing techniques are woven together in the same application, optimising the execution of transactional processes (in-process HTAP). It also simplifies and bridges the gap with traditional deployments by providing a platform for point-of-decision HTAP, whereby the transaction-processing and analytics aspects are segregated into distinct, independently designed applications but without moving the data. Such a pattern allows advanced analytics to be performed on “live” transactional data, something that is very hard to achieve in traditional architectures.
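
To illustrate the point-of-decision idea without the real engine, here is a toy sketch in which transactional writes and analytical reads share the same live store, so aggregates always reflect the latest data with no ETL hop. SQLite stands in for EventStore purely to keep the example self-contained; it says nothing about EventStore’s actual internals.

```python
# Toy HTAP illustration: the analytical query reads the same live table the
# transactions land in, so the decision always sees the latest data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (account TEXT, amount REAL, ts TEXT)")

def record_payment(account: str, amount: float, ts: str) -> None:
    with conn:  # transactional write path
        conn.execute("INSERT INTO payments VALUES (?, ?, ?)", (account, amount, ts))

def spend_since(account: str, since_ts: str) -> float:
    # Analytical read over the same live table, at the point of decision.
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM payments WHERE account = ? AND ts >= ?",
        (account, since_ts),
    ).fetchone()
    return row[0]

record_payment("ACC-1", 250.0, "2017-11-01T09:05:00")
record_payment("ACC-1", 900.0, "2017-11-01T09:40:00")
print("Spend since 09:00:", spend_since("ACC-1", "2017-11-01T09:00:00"))
```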

[Figure: EventStore architecture]

Project EventStore is a unique platform and a proactive effort from IBM to meet next-generation data-services requirements for a modern data fabric without added complexity. With benchmarks demonstrating a million events per second per node, enterprises can cater to new real-time insight needs at low cost.

Project EventStore is exciting because it is based on the best open-source technologies available today and thus enables wider adoption.

References:

IBM Project EventStore

The Lambda Architecture, simplified

 

Say hello to “Lisa”, the most impressive customer care officer. No AI can compete.

I recently called my bank and went through ten minutes of waiting on the phone, punching multiple keys, navigating an iterative menu, and dialing again. Apparently I was too stupid to identify which category my request should be placed in, and the customer-care AI system was efficient enough to disconnect a client not fully oriented with the bank’s voice menu. After a few attempts, the AI-powered system gave up on my intelligence and connected me to Lisa.

What a relief: a truly advanced system that could understand emotions, answer my open-ended questions, had no issue with my lingo, finished my request faster, and recommended a new product, which I gladly accepted as it was fun to interact with. Truly impressive customer care. Lisa is not the next generation of humanoid but a human herself. Say hello to Lisa, the most exceptional customer care officer.

In a genuinely democratic world where a vocal few are re-writing history and being neutral is a sin, as evident from the recent US election (http://brilliantmaps.com/did-not-vote/), I want to ensure I am doing my job to set the right priorities for myself and my fellow professionals, being an ML, AI, and big data evangelist myself. I see a lot of conferences where professionals and startups pride themselves on replacing normal human interaction with a robot or an AI-powered system. While this may sound cool, it doesn’t necessarily make business sense. Consider BankA, which replaces human interaction with an NLP-based engine to respond to customers and may be saving millions of dollars. That saving is diverted toward improving brand recognition, loyalty, and outreach to potential buyers. Now imagine another BankB, which employs people at scale, where each employee brings in new customers consistently and effortlessly through their network and relationships. The enhanced customer experience becomes a “brand” in itself, and customers remain loyal irrespective of promotional offerings from XYZ banks. Happy employees and happy clients make the world a better place to live.

There are problems the human race has been struggling with for generations, such as poverty, food crises, natural disasters, drinking water availability, healthcare, and education. And we have new ones such as cyber security, abuse of social media, fraud and terrorism, efficient transportation in rural areas, and a lot of “big questions” that can be answered by big data. As a professional, I prioritize and support projects that tackle these.

As IBM Watson Machine Learning, Microsoft Azure ML, and Amazon ML aim to simplify ML and empower more professionals, it’s time to emphasize the first and most important phase: the phase where you ask the question and specify what it is that you’re interested in learning from the data. “What question are we asking?”

What’s Ahead in 2016 for Big Data & Analytics

As the year starts, we will see a plethora of reports on trends for 2016. Those reports are based on numerical analysis and lots of fact-finding. However, we all have our own gut feeling… here is what I see for Big Data and Analytics. This is primarily for the Asia Pacific market and entirely my personal opinion.
1.    Focus on Data Acquisition.
In the past two years, early adopters have invested in big data platforms, yet they are still far from taking a real bite of “big data analytics”. Data acquisition will be key for them to fulfill that promise, and it is more than just social data acquisition. I am seeing demand for “data refineries” offering location-based data, demographic information, spending habits, etc. A great opportunity for startups. Not to forget weather data.
2.    Spotlight back on Data Integration tools.
Scripting is powerful, fast, and fun. It sounds “cool” when kicking off a big data initiative. However, that fun doesn’t last long, and I am seeing the spotlight back on big data integration tools. I would safely predict that all those who invested in a big data platform will be spending on data integration tools this year.
3.    One of the Hadoop vendors will get acquired by the end of 2016.
This may sound like it comes from nowhere, but it shouldn’t be surprising looking at the financials of most pure-play Hadoop vendors. In all probability, I see one of the Hadoop vendors getting acquired by the end of this year.
4.    Cloud becoming the primary deployment model for Analytics.
Business demand for agility (in action) and technology catering to data privacy will make the cloud the primary model for analytics infrastructure.

F1, Spark and Blu-ray Player

What do F1, Spark, and a Blu-ray player have in common? Before you start browsing your “intellectual” thoughts, let me state the fact: they just happen to be a few regular events from last week. I hosted a session on Apache Spark, attended the SIA F1 weekend event, and picked up my free Blu-ray player (a gift with my TV upgrade).
Back at work, while reflecting on various advice and feedback on big data analytics deployments, it seems these disjoint events actually represent today’s analytics ecosystem and processes.
F1 represents the ultimate in speed and agility, and Apache Spark promises to bring the same “speed and agility” to big data analytics.
F1 relies on discipline and rigor, and the key to winning is to adjust, adapt, and realign during the race (execution) itself. Big data analytics success factors are the same. It’s not about starting with a KPI-driven big bang and a rigid data governance approach. The key to big data analytics is to start with a minimal investment and a business-aligned, focused goal, then adjust, adapt, and re-align during the development lifecycle itself.
Apache Spark promises great agility in the big data analytics development lifecycle. It provides the ability to create a complete data science workflow: ingest, transform, and prepare data, execute analytic algorithms, analyze, and visualize, all on a single platform. A unified platform for such development allows you to rapidly adjust, adapt, and re-align, and thus promises to provide the business with the insights and agility they have been seeking.
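
As a small illustration of that single-platform workflow (assuming PySpark is installed), the sketch below ingests, transforms, and analyses a tiny made-up sales dataset in one job; a real pipeline would read from files or streams instead of inline rows.

```python
# PySpark sketch: ingest, transform, and analyse in a single job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("unified-workflow-sketch").getOrCreate()

# Ingest (inline rows stand in for a real source)
raw = spark.createDataFrame(
    [("SG", "2015-12-01", 120.0), ("SG", "2015-12-02", 80.0), ("MY", "2015-12-01", 200.0)],
    ["country", "order_date", "amount"],
)

# Transform / prepare
prepared = raw.withColumn("order_date", F.to_date("order_date"))

# Analyse
summary = prepared.groupBy("country").agg(
    F.sum("amount").alias("total_sales"),
    F.avg("amount").alias("avg_order_value"),
)
summary.show()

spark.stop()
```
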
What about the speed? While Spark holds the record for the fastest sort of 100 TB of data (1 trillion records), it keeps improving with each release, much like the Mercedes engine in F1 cars.

What about the Blu-ray player? While the Blu-ray player is an excellent piece of technology, I have been struggling to understand its relevance in my house. I watch movies on Apple TV; it’s agile (I can decide at any time what to watch, change my preference, pay, and enjoy). I use a USB drive or external drive for any of my existing content. I don’t see a reason why I should pay for costly Blu-ray discs, which force me to limit my choices and lose flexibility.

That last statement reflects the comments I have been hearing from business leaders about the value they see from their “traditional data warehouse” approach.
Add to this the disc and Blu-ray region code map (data governance gone wrong), which again limits what I can play: excellent technology, but irrelevant to me today.

So what represents your analytics ecosystem? “Speed and agility” or “high cost and rigidity”?