Is your Enterprise AI Ready: Strategic considerations for the CXOs
At the enterprise level, AI assumes enormous power and potential: it can disrupt, innovate, enhance, and in many cases totally transform businesses. Multiple reports predict a 300% increase in AI investment over 2020-2022 and estimate that, among several exponential technologies, the AI market will be the largest. There are solid instances of AI investment paying off—if CEOs adopt the right strategy. Organizations that deploy AI strategically enjoy advantages ranging from cost reductions and higher productivity to top-line benefits such as increased revenue and profits, enhanced customer experiences, and working-capital optimization. Multiple surveys also show that the companies winning at AI are more likely to enjoy broader business gains.
So how do you make your enterprise AI ready?
72% of organizations say they are getting significant impact from AI. But these enterprises have taken clear, practical steps to get the results they want. Here are five strategic orientations for embarking on the process of making the enterprise AI ready:
- Core AI A-team assimilation with diversified skill sets
- Evangelize AI amongst senior management
- Focus on process, not function
- Shift from system-of-record to system-of-intelligence apps, platforms
- Encourage innovation and transformation
Core AI A-team assimilation with diversified skill sets
Through 2022, organizations using cognitive ergonomics and system design in new AI projects will achieve long-term success four times more often than others.
With massive investments in AI startups in 2021 alone, and the exponential efficiencies created by AI, this evolution will happen quicker than many business leaders are prepared for. If you aren’t sure where to start, don’t worry – you’re not alone. The good news is that you still have options:
- You can acquire, or invest in a company applying AI/ML in your market, and gain access to new product and AI/ML talent.
- You can seek to invest as a limited partner in a few early stage AI focused VC firms, gaining immediate access and exposure to vetted early stage innovation, a community of experts and market trends.
- You can set out to build an AI-focused division to optimize your internal processes using AI, and map out how AI can be integrated into your future products. But recruiting in the space is painful and you will need a strong vision and sense of purpose to attract and retain the best.
Process-Based Focus Rather than Function-Based
One critical element differentiates AI success from AI failure: strategy. AI cannot be implemented piecemeal. It must be part of the organization’s overall business plan, along with aligned resources, structures, and processes. How a company prepares its corporate culture for this transformation is vital to its long-term success. That includes preparing talent by having senior management that understands the benefits of AI; fostering the right skills, talent, and training; managing change; and creating an environment with processes that welcome innovation before, during, and after the transition.
The challenge of AI isn't just the automation of processes—it's about the up-front process design and governance you put in to manage the automated enterprise. The ability to trace the reasoning path AI uses to make decisions is important. This visibility is crucial in banking & financial services, where auditors and regulators require firms to understand the source of a machine's decision.
Evangelize AI amongst senior management
One of the biggest challenges to enterprise transformation is resistance to change. Surveys have found that senior management is often the source of inertia in AI implementation. C-suite executives may not have warmed up to it either. There is such a lack of understanding about the benefits which AI can bring that the C-suite or board members simply don't want to invest in it, nor do they understand that failing to do so will adversely affect their top & bottom line and even cause them to go out of business. Regulatory uncertainty about AI, rough experiences with previous technological innovation, and a defensive posture to better protect shareholders, not stakeholders, may be contributing factors.
Pursuing AI without senior management support is difficult. Here the numbers again speak for themselves. The majority of leading AI companies (68%) strongly agree that their senior management understands the benefits AI offers. By contrast, only 7% of laggard firms agree with this view. Curiously, though, the leading group still cites the lack of senior management vision as one of the top two barriers to the adoption of AI.
The Dawn of System-of-Intelligence Apps & Platforms
Analyst reports predict that an intelligence stack will gain rapid adoption in enterprises as IT departments shift from system-of-record to system-of-intelligence apps, platforms, and priorities. The future of enterprise software is being defined by increasingly intelligent applications today, and this will accelerate in the future.
By 2022, AI platform services will cannibalize revenues for 30% of market-leading companies.
It will be commonplace for enterprise apps to have machine learning algorithms that can provide predictive insights across a broad base of scenarios encompassing a company’s entire value chain. The potential exists for enterprise apps to change selling and buying behaviour, tailoring specific responses based on real-time data to optimize discounting, pricing, proposal and quoting decisions.
The Process of Supporting Innovation
Besides developing capabilities among employees, an organization's culture and processes must also support new approaches and technologies. Innovation waves take a lot longer because of the human element. You can't just put posters on the walls and say, "Hey, we have become an AI-enabled company, so let's change the culture." The way it works is to identify and drive visible examples of adoption. Algorithmic trading, image recognition/tagging, and patient data processing are predicted to be the top AI use cases by 2025. Predictive maintenance and content distribution on social media are forecast to be the fourth- and fifth-highest revenue-producing AI use cases over the next eight years.
In the End, It's About Transforming the Enterprise
AI is part of a much bigger process of re-engineering enterprises. That is the major difference between the automation attempts of yesteryear and today's AI: AI is completely integrated into the fabric of business, allowing private and public-sector organizations to transform themselves and society in profound ways. Enterprises that deploy AI at full scale will reap tangible benefits at both the strategic & operational levels.
Reimagine Business Strategy & Operating Models with AI : The CXO’s Playbook
AlphaGo caused a stir by defeating 18-time world champion Lee Sedol in Go, a game thought to be impenetrable by AI for another 10 years. AlphaGo's success is emblematic of a broader trend: an explosion of data and advances in algorithms have made technology smarter than ever before. Machines can now carry out tasks ranging from recommending movies to diagnosing cancer — independently of, and in many cases better than, humans. In addition to executing well-defined tasks, technology is starting to address broader, more ambiguous problems. It's not implausible to imagine that one day a "strategist in a box" could autonomously develop and execute a business strategy. I have spoken to several CXOs and leaders who express such a vision — and they would like to embed AI in their business strategy and operating models.
Business Processes – Increasing productivity by reducing disruptions
AI algorithms are not natively "intelligent." They learn inductively by analyzing data. Most leaders are investing in AI talent and have built robust information infrastructures. When Airbus started to ramp up production of its new A350 aircraft, the company faced a multibillion-euro challenge. The plan was to increase the production rate of that aircraft faster than ever before. To do that, they needed to address issues like responding quickly to disruptions in the factory, because disruptions will happen. Airbus turned to AI: it combined data from past production programs, continuing input from the A350 program, fuzzy matching, and a self-learning algorithm to identify patterns in production problems. AI led to the rectification of about 70% of production disruptions for Airbus by matching them to solutions used previously — in near real time.
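To make the idea concrete, here is a minimal, purely illustrative sketch of how a new disruption report might be fuzzy-matched against a library of past incidents and their fixes. The incident texts, threshold, and matching method are assumptions for illustration, not Airbus's actual system.

```python
from difflib import SequenceMatcher

# Illustrative library of past production disruptions and the fixes that worked.
# In practice this would come from historical production-program records.
past_incidents = [
    ("hydraulic line misalignment at station 41", "re-torque brackets and re-route line per spec"),
    ("missing fasteners in wing-box subassembly", "add hold point; update kitting checklist"),
    ("cabin harness connector pin damage", "replace connector; add protective cap during transport"),
]

def best_match(new_report: str, threshold: float = 0.4):
    """Return the most similar past incident and its fix, if similarity clears the threshold."""
    scored = [
        (SequenceMatcher(None, new_report.lower(), desc.lower()).ratio(), desc, fix)
        for desc, fix in past_incidents
    ]
    score, desc, fix = max(scored)
    return (desc, fix, round(score, 2)) if score >= threshold else None

print(best_match("damaged pins found on cabin harness connector"))
```

In practice the matching would run over far richer structured and unstructured production data, but the principle is the same: score similarity against history and retrieve the fix that worked before.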
Just as it is enabling speed and efficiency at Airbus, AI capabilities are leading directly to new, better processes and results at other pioneering organizations. Other large companies, such as BP, Wells Fargo, and Ping An Insurance, are already solving important business problems with AI. Many others, however, have yet to get started.
Integrated Strategy Machine – The Implementation Scope of AI @ scale
The integrated strategy machine is the AI analogy of what new factory designs were for electricity. In other words, the increasing intelligence of machines could be wasted unless businesses reshape the way they develop and execute their strategies. No matter how advanced technology is, it needs human partners to enhance competitive advantage. It must be embedded in what we call the integrated strategy machine. An integrated strategy machine is the collection of resources, both technological and human, that act in concert to develop and execute business strategies. It comprises a range of conceptual and analytical operations, including problem definition, signal processing, pattern recognition, abstraction and conceptualization, analysis, and prediction. One of its critical functions is reframing, which is repeatedly redefining the problem to enable deeper insights.
Amazon represents the state-of-the-art in deploying an integrated strategy machine. It has at least 21 AI systems, which include several supply chain optimization systems, an inventory forecasting system, a sales forecasting system, a profit optimization system, a recommendation engine, and many others. These systems are closely intertwined with each other and with human strategists to create an integrated, well-oiled machine. If the sales forecasting system detects that the popularity of an item is increasing, it starts a cascade of changes throughout the system: The inventory forecast is updated, causing the supply chain system to optimize inventory across its warehouses; the recommendation engine pushes the item more, causing sales forecasts to increase; the profit optimization system adjusts pricing, again updating the sales forecast.
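The cascade described above can be pictured as a simple event-driven loop in which each system reacts to the others' outputs. The sketch below is a toy illustration only; the system names, numbers, and update rules are assumptions, not Amazon's actual architecture.

```python
# Illustrative only: a toy cascade in which a demand signal ripples through
# forecasting, inventory, recommendations, and pricing systems.

state = {"sales_forecast": 1000, "inventory_target": 1200, "promo_weight": 1.0, "price": 20.0}

def on_demand_spike(observed_uplift: float) -> None:
    # 1. Sales forecasting system raises its forecast.
    state["sales_forecast"] *= (1 + observed_uplift)
    # 2. Supply chain system re-optimizes inventory to cover the new forecast plus safety stock.
    state["inventory_target"] = state["sales_forecast"] * 1.2
    # 3. Recommendation engine pushes the item harder, which feeds back into the forecast.
    state["promo_weight"] *= 1.1
    state["sales_forecast"] *= 1.05
    # 4. Profit optimization system nudges the price, which again updates the forecast.
    state["price"] *= 1.03
    state["sales_forecast"] *= 0.99

on_demand_spike(0.15)
print(state)
```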
Manufacturing Operations – An AI assistant on the floor
CXOs at industrial companies expect the largest effect in operations and manufacturing. BP plc, for example, augments human skills with AI in order to improve operations in the field. They have something called the BP well advisor that takes all of the data coming off the drilling systems and creates advice for engineers to adjust their drilling parameters to remain in the optimum zone, alerting them to potential operational upsets and risks down the road. They are also trying to automate root-cause failure analysis so that the system trains itself over time and has the intelligence to rapidly assess and move from description to prediction to prescription.
Customer-facing activities – Near real time scoring
Ping An Insurance Co. of China Ltd., the second-largest insurer in China, with a market capitalization of $120 billion, is improving customer service across its insurance and financial services portfolio with AI. For example, it now offers an online loan in three minutes, thanks in part to a customer scoring tool that uses an internally developed AI-based face-recognition capability that is more accurate than humans. The tool has verified more than 300 million faces in various uses and now complements Ping An’s cognitive AI capabilities including voice and imaging recognition.
AI for Different Operational Strategy Models
To make the most of this technology implementation in various business operations in your enterprise, consider the three main ways that businesses can or will use AI:
1. Insights enabled intelligence
Now widely available, insights enabled intelligence improves what people and organizations are already doing. For example, Google's Gmail sorts incoming email into "Primary," "Social," and "Promotions" default tabs. The algorithm, trained with data from millions of other users' emails, makes people more efficient without changing the way they use email or altering the value it provides. Insights enabled intelligence tends to involve clearly defined, rules-based, repeatable tasks.
Insights enabled intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behaviour, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model's predictions will become ever more accurate.
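A toy version of such a launch simulator might look like the following; the factors, payoffs, and Monte Carlo approach are invented for illustration, and the real model described above would be far richer.

```python
import itertools
import random

# Toy launch simulator (illustrative factors and uptake rates, not the automaker's model).
city_types = ["dense-urban", "suburban", "rural"]
price_bands = [25_000, 35_000, 50_000]
marketing = ["digital-heavy", "dealer-led", "mixed"]

def simulate_launch(city: str, price: int, campaign: str, trials: int = 1_000) -> int:
    """Crude Monte Carlo estimate of expected units sold for one launch variation."""
    base = {"dense-urban": 0.08, "suburban": 0.05, "rural": 0.02}[city]
    price_factor = 35_000 / price
    campaign_factor = {"digital-heavy": 1.2, "dealer-led": 1.0, "mixed": 1.1}[campaign]
    return sum(random.random() < base * price_factor * campaign_factor for _ in range(trials))

# Score every combination of factors, then pick the most promising variation.
results = {combo: simulate_launch(*combo) for combo in itertools.product(city_types, price_bands, marketing)}
best = max(results, key=results.get)
print(best, results[best])
```

The value of the approach is that every variation can be scored cheaply before any real launch budget is committed.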
2. Recommendation based Intelligence
Recommendation based Intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do. Unlike insights enabled intelligence, it fundamentally alters the nature of the task, and business models change accordingly.
Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behaviour, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI).
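A minimal item-similarity recommender in the spirit of what is described (score unseen titles by how similar they are to what a viewer has already watched) could be sketched as follows. The ratings matrix and titles are made up; production recommendation systems are far more sophisticated.

```python
import numpy as np

# Tiny illustrative user-item matrix (rows = users, columns = titles); 0 = not watched.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)
titles = ["Drama A", "Drama B", "Thriller C", "Thriller D"]

def item_similarity(m: np.ndarray) -> np.ndarray:
    """Cosine similarity between titles, computed from all users' ratings."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    return (m.T @ m) / (norms.T @ norms + 1e-9)

sim = item_similarity(ratings)

def recommend(user_idx: int, top_n: int = 2) -> list[str]:
    """Score unseen titles by similarity-weighted ratings of what the user has watched."""
    user = ratings[user_idx]
    scores = sim @ user
    scores[user > 0] = -np.inf  # never re-recommend titles already watched
    ranked = [i for i in np.argsort(scores)[::-1] if np.isfinite(scores[i])]
    return [titles[i] for i in ranked[:top_n]]

print(recommend(0))
```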
3. Decision enabled Intelligence
Being developed for the future, Decision enabled intelligence creates and deploys machines that act on their own. Very few intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.
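As a stripped-down illustration of an autonomous decision rule of the kind used in algorithmic trading, consider a simple moving-average crossover. Real trading systems are vastly more sophisticated and sit behind layers of risk controls; this sketch only shows a machine making a buy/sell/hold call with no human in the loop.

```python
def moving_average(prices: list[float], window: int) -> float:
    return sum(prices[-window:]) / window

def decide(prices: list[float], short: int = 5, long: int = 20) -> str:
    """Autonomous buy/sell/hold decision from a moving-average crossover (illustrative only)."""
    if len(prices) < long:
        return "hold"
    fast, slow = moving_average(prices, short), moving_average(prices, long)
    if fast > slow * 1.01:
        return "buy"
    if fast < slow * 0.99:
        return "sell"
    return "hold"

price_history = [100 + 0.3 * i for i in range(30)]  # synthetic upward-trending prices
print(decide(price_history))                         # -> "buy"
```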
As you contemplate the deployment of artificial intelligence at scale, articulate what mix of the three approaches works best for you.
a) Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with insights enabled intelligence and a clear AI strategy roadmap.
b) Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue a decision enabled intelligence approach, probably with more complex AI applications and robust infrastructure.
c) Are you developing a genuinely new platform? In that case, think of building first principles of AI led strategy across the functionalities and processes of the platform.
CXOs need to create their own AI strategy playbook to reimagine their business strategies and operating models and derive accentuated business performance.
“RE-ENGINEERING” BUSINESSES – THINK “AI” led STRATEGY
While AI adoption across industries is galloping at a rapid pace and the resulting benefits are increasing by the day, some businesses are challenged by the complexity and confusion that AI can generate. Enterprises can get stuck trying to analyse all that's possible and all that they could do through AI, when they should be taking the next step of recognizing what's important and what they should be doing — for their customers, stakeholders, and employees. Discovering real business opportunities and achieving desired outcomes can be elusive. To overcome this, enterprises should pursue a constant attempt to re-engineer their AI strategy to generate insights & intelligence that lead to real outcomes.
Re-engineering Data Architecture & Infrastructure
To successfully derive value from data immediately, there is a need for faster data analysis than is currently available using traditional data management technology. With the explosion of digital analytics, social media, and the "Internet of Things" (IoT), there is an opportunity to radically re-engineer data architecture to provide organizations with a tiered approach to data collection, with real-time and historical data analyses. Infrastructure-as-a-service for AI is the combination of components that enables an architecture that delivers the right business outcomes. Developing this architecture involves designing the cluster computing power and networking, along with innovations in software that enable advanced technology services and interconnectivity. Infrastructure enables optimal processing and storage of data and is an important foundation for any data farm.
The new era of AI led infrastructure is built on virtualized (analytics) environments, which can be referred to as the next big "V" of big data. The virtualization approach has several advantages, such as scalability, ease of maintenance, elasticity, cost savings due to better utilization of resources, and the abstraction of the external layer from the internal implementation (back end) of a service or resource. Containers are the trending technology making headlines recently as an approach to virtualization and cloud-enabled data centres. Fortune 500 companies have begun to "containerize" their servers, data centres, and cloud applications with Docker. Containerization sidesteps many of the problems of virtualization by eliminating the hypervisor and its VMs: each application is deployed in its own container, which runs on the "bare metal" of the server plus a single, shared instance of the operating system.
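As a hedged illustration of that model, the snippet below runs an application in its own throwaway container on the shared host kernel. It assumes the Docker Engine is running locally and the Python docker SDK is installed; the image and command are arbitrary examples.

```python
import docker  # Python SDK for the Docker Engine; assumes a local Docker daemon is running

client = docker.from_env()

# Each application runs in its own container on the shared host OS kernel;
# no hypervisor or guest VM is involved. Run a throwaway container and capture its output.
output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('isolated app process')"],
    remove=True,  # clean up the container after it exits
)
print(output.decode())
```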
AI led Business Process Re-Engineering
The BPR methodologies of the past have significantly contributed to the development of today's enterprises. However, today's business landscape has become increasingly complex and fast-paced. The regulatory environment is also constantly changing. Consumers have become more sophisticated and have easy access to information on the go. Staying competitive in the present business environment requires organizations to go beyond process efficiencies, incremental improvements, and enhancing transactional flow. Now, organizations need a comprehensive understanding of their business models through an objective and realistic grasp of their business processes. This entails having organization-wide insights that show the interdependence of various internal functions while taking into consideration regulatory requirements and shifting consumer tastes.
Data is the basis on which fact-based analysis is performed to obtain objective insights of the organization. In order to obtain organization-wide insights, management needs to employ AI capabilities on data that resides both inside and outside its organization. However, an organization’s AI capabilities are primarily dependent on the type, amount and quality of data it possesses.
The integration of an organization’s three key dimensions of people, process and technology is also critical during process design. The people are the individuals responsible and accountable for the organization’s processes. The process is the chain of activities required to keep the organization running. The technology is the suite of tools that support, monitor and ensure consistency in the application of the process. The integration of all these, through the support of a clear governance structure, is critical in sustaining a fact-based driven organizational culture and the effective capture, movement and analysis of data. Designing processes would then be most effective if it is based on data-driven insights and when AI capabilities are embedded into the re-engineered processes. Data-driven insights are essential in gaining a concrete understanding of the current business environment and utilizing these insights is critical in designing business processes that are flexible, agile and dynamic.
Re-engineering Customer Experience (CX) – The new paradigm
It's always of great interest to me to see new trends emerge in our space. One such trend gaining momentum is enterprises looking to solve customer needs & expectations with what I'd describe as re-engineering customer experience. Just like everything else in our industry, changes in consumer behaviour caused by mobile and social trends are disrupting the CX space. Just a few years ago, web analytics solutions gave brands the best view into the performance of their digital business and user behaviours. Fast-forward to today, and this is often not the case. With the growth in volume and importance of new devices, digital channels, and touch points, CX solutions are now just one of the many digital data silos that brands need to deal with and integrate into the full digital picture. While some vendors may now offer ways for their solutions to run in different channels and on a range of devices, these capabilities are often still a work in progress. Many enterprises today find their CX solution is another critical set of insights that must be downloaded daily into an omni-channel AI data store, with visualization run on top to provide cross-channel business reporting.
Re-shaping Talent Acquisition and Engagement with AI
AI is causing disruption in virtually every function, but talent acquisition is one of the more recent to get a business refresh. A new data driven approach to talent management is reshaping the way organizations find and hire staff, while the power of talent analytics is also changing how HR tackles employee retention and engagement. The implications for anyone hoping to land a job, and for businesses that have traditionally relied on personal relationships, are extreme, but robots and algorithms will not yet completely replace human interaction. AI will certainly help to identify talent in specific searches: rather than relying on a rigorous interview process and a resume, employers are able to "mine" deep reserves of information, including a candidate's online footprint. The real value will be in identifying personality types, abilities, and other strengths to help create well-rounded teams. Companies are also using people analytics to understand the stress levels of their employees to ensure long-term productiveness and wellness.
The Final Word
Based on my experiences with clients across enterprises, GCCs, and start-ups, alignment among the three key dimensions of talent, process, and AI led technology within a robust governance structure is critical to effectively utilize AI and remain competitive in the current business environment. AI is able to open doors to growth & scalability through insights & intelligence, resulting in the identification of industry white spaces. It enhances operational efficiency through process improvements based on relevant and fact-based data. It is able to enrich human capital through workforce analysis, resulting in more effective human capital management. It is able to mitigate risks by identifying areas of regulatory and company policy non-compliance before actual damage is done. An AI led re-engineering approach unleashes the potential of an organization by putting the facts and the reality into the hands of the decision makers.
(AIQRATE is a bespoke global AI advisory and consulting firm. A first in its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI powered transformation & innovation journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs and senior leaders, advising them on navigating their Analytics to AI journey with the art of the possible, or making them jumpstart to an AI rhythm with an AI@scale approach, followed by consulting them on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We have proven bespoke AI advisory services to enable CXOs and senior leaders to curate & design the building blocks of AI strategy, embed AI@scale interventions and create AI powered organizations.
AIQRATE’s path breaking 50+ AI consulting frameworks, assessments, primers, toolkits and playbooks enable Indian & global enterprises, GCCs, Startups, SMBs, VC/PE firms, and Academic Institutions enhance business performance and accelerate decision making.
AIQRATE also consults with Consulting firms, Technology service providers, Pure play AI firms, Technology behemoths & Platform enterprises on curating differentiated & bespoke AI capabilities & offerings, market development scenarios & GTM approaches.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings)
Data Driven Enterprise – Part II: Building an operative data ecosystems strategy
Ecosystems—interconnected sets of services in a single integrated experience—have emerged across a range of industries, from financial services to retail to healthcare. Ecosystems are not limited to a single sector; indeed, many transcend multiple sectors. For traditional incumbents, ecosystems can provide a golden opportunity to increase their influence and fend off potential disruption by faster-moving digital attackers. For example, banks are at risk of losing half of their margins to fintechs, but they have the opportunity to increase margins by a similar amount by orchestrating an ecosystem.
In my experience, many ecosystems focus on the provision of data: exchange, availability, and analysis. Incumbents seeking to excel in these areas must develop the proper data strategy, business model, and architecture.
What is a data ecosystem?
Simply put, a data ecosystem is a platform that combines data from numerous providers and builds value through the usage of processed data. A successful ecosystem balances two priorities:
- Building economies of scale by attracting participants through lower barriers to entry. In addition, the ecosystem must generate clear customer benefits and dependencies beyond the core product to establish high exit barriers over the long term.
- Cultivating a collaboration network that motivates a large number of parties with similar interests (such as app developers) to join forces and pursue similar objectives. One of the key benefits of the ecosystem comes from the participation of multiple categories of players (such as app developers and app users).
What are the data-ecosystem archetypes?
As data ecosystems have evolved, five archetypes have emerged. They vary based on the model for data aggregation, the types of services offered, and the engagement methods of other participants in the ecosystem.
- Data utilities. By aggregating data sets, data utilities provide value-adding tools and services to other businesses. The category includes credit bureaus, consumer-insights firms, and insurance-claim platforms.
- Operations optimization and efficiency centers of excellence. This archetype vertically integrates data within the business and the wider value chain to achieve operational efficiencies. An example is an ecosystem that integrates data from entities across a supply chain to offer greater transparency and management capabilities.
- End-to-end cross-sectorial platforms. By integrating multiple partner activities and data, this archetype provides an end-to-end service to the customers or business through a single platform. Car reselling, testing platforms, and partnership networks with a shared loyalty program exemplify this archetype.
- Marketplace platforms. These platforms offer products and services as a conduit between suppliers and consumers or businesses. Amazon and Alibaba are leading examples.
- B2B infrastructure (platform as a business). This archetype builds a core infrastructure and tech platform on which other companies establish their ecosystem business. Examples of such businesses are data-management platforms and payment-infrastructure providers.
The ingredients for a successful data ecosystem : Data ecosystems have the potential to generate significant value. However, the entry barriers to establishing an ecosystem are typically high, so companies must understand the landscape and potential obstacles. Typically, the hardest pieces to figure out are finding the best business model to generate revenues for the orchestrator and ensuring participation.
If the market already has a large, established player, companies may find it difficult to stake out a position. To choose the right partners, executives need to pinpoint the value they can offer and then select collaborators who complement and support their strategic ambitions. Similarly, companies should look to create a unique value proposition and excellent customer experience to attract both end customers and other collaborators. Working with third parties often requires additional resources, such as negotiating teams supported by legal specialists to negotiate and structure the collaboration with potential partners. Ideally, partnerships should be mutually beneficial arrangements between the ecosystem leader and other participants.
As companies look to enable data pooling and the benefits it can generate, they must be aware of laws regarding competition. Companies that agree to share access to data, technology, and collection methods restrict access for other companies, which could raise anti-competition concerns. Executives must also ensure that they address privacy concerns, which can differ by geography.
Other capabilities and resources are needed to create and build an ecosystem. For example, to find and recruit specialists and tech talent, organizations must create career opportunities and a welcoming environment. Significant investments will also be needed to cover the costs of data-migration projects and ecosystem maintenance.
Ensuring ecosystem participants have access to data
Once a company selects its data-ecosystem archetype, executives should then focus on setting up the right infrastructure to support its operation. An ecosystem can't deliver on its promise to participants without ensuring access to data, and that critical element relies on the design of the data architecture. We have identified five questions that incumbents must resolve when setting up their data ecosystem.
How do we exchange data among partners in the ecosystem?
Industry experience shows that standard data-exchange mechanisms among partners, such as cookie handshakes, can be effective. The data exchange typically follows three steps: establishing a secure connection, exchanging data through browsers and clients, and storing results centrally when necessary.
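A hedged sketch of those three steps might look like the following, assuming a partner exposes an HTTPS endpoint secured with a bearer token; the endpoint, token handling, and storage table are invented for illustration.

```python
import json
import sqlite3
import requests  # third-party HTTP client

PARTNER_URL = "https://partner.example.com/api/v1/shared-data"  # hypothetical endpoint
API_TOKEN = "token-issued-during-onboarding"                     # placeholder credential

def fetch_partner_data() -> list:
    # Steps 1-2: establish a secure (TLS) connection and exchange data as JSON.
    resp = requests.get(
        PARTNER_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def store_centrally(records: list, db_path: str = "ecosystem.db") -> None:
    # Step 3: store results centrally when the use case requires it.
    with sqlite3.connect(db_path) as con:
        con.execute("CREATE TABLE IF NOT EXISTS partner_data (payload TEXT)")
        con.executemany(
            "INSERT INTO partner_data (payload) VALUES (?)",
            [(json.dumps(r),) for r in records],
        )

store_centrally(fetch_partner_data())
```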
How do we manage identity and access?
Companies can pursue two strategies to select and implement an identity-management system. The more common approach is to centralize identity management through solutions such as Okta, OpenID, or Ping. An emerging approach is to decentralize and federate identity management—for example, by using blockchain ledger mechanisms.
How can we define data domains and storage?
Traditionally, an ecosystem orchestrator would centralize data within each domain. More recent trends in data-asset management favor an open data-mesh architecture. Data mesh challenges the conventional centralization of data ownership within one party by using existing definitions and domain assets within each party, based on each use case or product. Certain use cases may still require centralized domain definitions with central storage. In addition, global data-governance standards must be defined to ensure interoperability of data assets.
How do we manage access to non-local data assets, and how can we possibly consolidate?
Most use cases can be implemented with periodic data loads through application programming interfaces (APIs). This approach results in a majority of use cases having decentralized data storage. Pursuing this environment requires two enablers: a central API catalog that defines all APIs available to ensure consistency of approach, and strong group governance for data sharing.
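In miniature, the two enablers could look something like this: a central catalog of registered data-sharing APIs and a periodic pull job that consumes them. All names, URLs, and refresh intervals are illustrative assumptions.

```python
import requests

# Minimal stand-in for a central API catalog: every data-sharing API the ecosystem
# exposes is registered here so consumers use one consistent, governed entry point.
API_CATALOG = {
    "orders":    {"url": "https://partner-a.example.com/api/orders",    "refresh_hours": 24},
    "shipments": {"url": "https://partner-b.example.com/api/shipments", "refresh_hours": 6},
}

def periodic_load(dataset: str) -> list:
    """Pull one registered dataset; a real job would be scheduled, authenticated, and logged."""
    entry = API_CATALOG[dataset]
    resp = requests.get(entry["url"], timeout=30)
    resp.raise_for_status()
    return resp.json()

for name, entry in API_CATALOG.items():
    records = periodic_load(name)
    print(f"{name}: {len(records)} records (next refresh in {entry['refresh_hours']}h)")
```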
How do we scale the ecosystem, given its heterogeneous and loosely coupled nature?
Enabling rapid and decentralized access to data or data outputs is the key to scaling the ecosystem. This objective can be achieved by having robust governance to ensure that all participants of the ecosystem do the following (a minimal sketch of such a participant-published data-asset descriptor appears after the list):
- Make their data assets discoverable, addressable, versioned, and trustworthy in terms of accuracy
- Use self-describing semantics and open standards for data exchange
- Support secure exchanges while allowing access at a granular level
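One way a participant might publish the metadata implied by the list above (discoverable, addressable, versioned, trustworthy, and self-describing) is sketched below; the fields and example values are assumptions, not a formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataAssetDescriptor:
    """Illustrative metadata a participant could publish so its asset is
    discoverable, addressable, versioned, and self-describing."""
    name: str                    # discoverable: human-readable identifier
    address: str                 # addressable: stable URI / API endpoint
    version: str                 # versioned: consumers can pin or migrate deliberately
    schema: dict                 # self-describing semantics (field -> type and meaning)
    quality_checks: list = field(default_factory=list)  # evidence of trustworthiness
    access_level: str = "granular"                       # row/column-level entitlements

asset = DataAssetDescriptor(
    name="retail-transactions",
    address="https://ecosystem.example.com/data/retail-transactions",
    version="2.3.0",
    schema={"txn_id": "string", "amount": "decimal, local currency", "timestamp": "ISO 8601"},
    quality_checks=["fill rate > 98%", "daily freshness check"],
)
print(asset.name, asset.version)
```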
The success of a data-ecosystem strategy depends on data availability and digitization, API readiness to enable integration, data privacy and compliance—for example, General Data Protection Regulation (GDPR)—and user access in a distributed setup. This range of attributes requires companies to design their data architecture to check all these boxes.
As incumbents consider establishing data ecosystems, we recommend they develop a road map that specifically addresses the common challenges. They should then look to define their architecture to ensure that the benefits to participants and themselves come to fruition. The good news is that the data-architecture requirements for ecosystems are not complex. The priority components are identity and access management, a minimum set of tools to manage data and analytics, and central data storage. That said, developing an operative data-ecosystem strategy is far more difficult than getting the tech requirements right.
Data Driven Enterprise – Part I: Building an effective Data Strategy for competitive edge
Few Enterprises take full advantage of data generated outside their walls. A well-structured data strategy for using external data can provide a competitive edge. Many enterprises have made great strides in collecting and utilizing data from their own activities. So far, though, comparatively few have realized the full potential of linking internal data with data provided by third parties, vendors, or public data sources. Overlooking such external data is a missed opportunity. Organizations that stay abreast of the expanding external-data ecosystem and successfully integrate a broad spectrum of external data into their operations can outperform other companies by unlocking improvements in growth, productivity, and risk management.
The COVID-19 crisis provides an example of just how relevant external data can be. In a few short months, consumer purchasing habits, activities, and digital behavior changed dramatically, making preexisting consumer research, forecasts, and predictive models obsolete. Moreover, as organizations scrambled to understand these changing patterns, they discovered little of use in their internal data. Meanwhile, a wealth of external data could—and still can—help organizations plan and respond at a granular level. Although external-data sources offer immense potential, they also present several practical challenges. To start, simply gaining a basic understanding of what’s available requires considerable effort, given that the external-data environment is fragmented and expanding quickly. Thousands of data products can be obtained through a multitude of channels—including data brokers, data aggregators, and analytics platforms—and the number grows every day. Analyzing the quality and economic value of data products also can be difficult. Moreover, efficient usage and operationalization of external data may require updates to the organization’s existing data environment, including changes to systems and infrastructure. Companies also need to remain cognizant of privacy concerns and consumer scrutiny when they use some types of external data.
These challenges are considerable but surmountable. This blog series discusses the benefits of tapping external-data sources, illustrated through a variety of examples, and lays out best practices for getting started. These include establishing an external-data strategy team and developing relationships with data brokers and marketplace partners. Company leaders, such as the executive sponsor of a data effort and a chief data and analytics officer, and their data-focused teams should also learn how to rigorously evaluate and test external data before using and operationalizing the data at scale.
External-data success stories: Companies across industries have begun successfully using external data from a variety of sources . The investment community is a pioneer in this space. To predict outcomes and generate investment returns, analysts and data scientists in investment firms have gathered “alternative data” from a variety of licensed and public data sources, many of which draw from the “digital exhaust” of a growing number of technology companies and the public web. Investment firms have established teams that assess hundreds of these data sources and providers and then test their effectiveness in investment decisions.
A broad range of data sources are used, and these inform investment decisions in a variety of ways:
- Investors actively gather job postings, company reviews posted by employees, employee-turnover data from professional networking and career websites, and patent filings to understand company strategy and predict financial performance and organizational growth.
- Analysts use aggregated transaction data from card processors and digital-receipt data to understand the volume of purchases by consumers, both online and offline, and to identify which products are increasing in share. This gives them a better understanding of whether traffic is declining or growing, as well as insights into cross-shopping behaviors.
- Investors study app downloads and digital activity to understand how consumer preferences are changing and how effective an organization’s digital strategy is relative to that of its peers. For instance, app downloads, activity, and rating data can provide a window into the success rates of the myriad of live-streaming exercise offerings that have become available over the last year.
Corporations have also started to explore how they can derive more value from external data . For example, a large insurer transformed its core processes, including underwriting, by expanding its use of external-data sources from a handful to more than 40 in the span of two years. The effort involved was considerable; it required prioritization from senior leadership, dedicated resources, and a systematic approach to testing and applying new data sources. The hard work paid off, increasing the predictive power of core models by more than 20 percent and dramatically reducing application complexity by allowing the insurer to eliminate many of the questions it typically included on customer applications.
Three steps to creating value with external data:
Use of external data has the potential to be game changing across a variety of business functions and sectors. The journey toward successfully using external data has three key steps.
1. Establish a dedicated team for external-data sourcing
To get started, organizations should establish a dedicated data-sourcing team. Per our understanding at AIQRATE , a key role on this team is a dedicated data scout or strategist who partners with the data-analytics team and business functions to identify operational, cost, and growth improvements that could be powered by external data. This person also would be responsible for building excitement around what can be made possible through the use of external data, planning the use cases to focus on, identifying and prioritizing data sources for investigation, and measuring the value generated through use of external data. Ideal candidates for this role are individuals who have served as analytics translators and who have experience in deploying analytics use cases and in working with technology, business, and analytics profiles.
The other team members, who should be drawn from across functions, would include purchasing experts, data engineers, data scientists and analysts, technology experts, and data-review-board members . These team members typically spend only part of their time supporting the data-sourcing effort. For example, the data analysts and data scientists may already be supporting data cleaning and modeling for a specific use case and help the sourcing work stream by applying the external data to assess its value. The purchasing expert, already well versed in managing contracts, will build specialization on data-specific licensing approaches to support those efforts.
Throughout the process of finding and using external data, companies must keep in mind privacy concerns and consumer scrutiny, making data-review roles essential peripheral team members. Data reviewers, who typically include legal, risk, and business leaders, should thoroughly vet new consumer data sets—for example, financial transactions, employment data, and cell-phone data indicating when and where people have entered retail locations. The vetting process should ensure that all data were collected with appropriate permissions and will be used in a way that abides by relevant data-privacy laws and passes muster with consumers. This team will need a budget to procure small exploratory data sets, establish relationships with data marketplaces (such as by purchasing trial licenses), and pay for technology requirements (such as expanded data storage).
2. Develop relationships with data marketplaces and aggregators
While online searches may appear to be an easy way for data-sourcing teams to find individual data sets, that approach is not necessarily the most effective. It generally leads to a series of time-consuming vendor-by-vendor discussions and negotiations. The process of developing relationships with a vendor, procuring sample data, and negotiating trial agreements often takes months. A more effective strategy involves using data-marketplace and -aggregation platforms that specialize in building relationships with hundreds of data sources, often in specific data domains—for example, consumer, real-estate, government, or company data. These relationships can give organizations ready access to the broader data ecosystem through an intuitive search-oriented platform, allowing organizations to rapidly test dozens or even hundreds of data sets under the auspices of a single contract and negotiation. Since these external-data distributors have already profiled many data sources, they can be valuable thought partners and can often save an external-data team significant time. When needed, these data distributors can also help identify valuable data products and act as the broker to procure the data.
Once the team has identified a potential data set, the team’s data engineers should work directly with business stakeholders and data scientists to evaluate the data and determine the degree to which the data will improve business outcomes. To do so, data teams establish evaluation criteria, assessing data across a variety of factors to determine whether the data set has the necessary characteristics for delivering valuable insights . Data assessments should include an examination of quality indicators, such as fill rates, coverage, bias, and profiling metrics, within the context of the use case. For example, a transaction data provider may claim to have hundreds of millions of transactions that help illuminate consumer trends. However, if the data include only transactions made by millennial consumers, the data set will not be useful to a company seeking to understand broader, generation-agnostic consumer trends.
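A first-pass quality profile of the kind described above (fill rates, coverage, and a crude bias flag) can be sketched in a few lines of pandas. The sample data, column names, and thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical sample from an external transaction-data provider.
sample = pd.DataFrame({
    "customer_age": [24, 31, None, 27, 29, 26, None, 33],
    "amount":       [12.5, 40.0, 22.1, None, 8.9, 15.0, 60.2, 33.3],
    "channel":      ["online", "online", "store", "online", None, "online", "online", "store"],
})

profile = {
    "fill_rate": sample.notna().mean().round(2).to_dict(),  # completeness per field
    "coverage_rows": len(sample),                            # volume in the sample
    "age_mix": sample["customer_age"].describe()[["min", "max", "mean"]].to_dict(),
}
# Simple bias flag: does the sample skew toward one cohort (e.g., only younger consumers)?
profile["possible_age_bias"] = bool(sample["customer_age"].max() < 40)

print(profile)
```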
3. Prepare the data architecture for new external-data streams
Generating a positive return on investment from external data calls for up-front planning, a flexible data architecture, and ongoing quality-assurance testing. Up-front planning starts with an assessment of the existing data environment to determine how it can support ingestion, storage, integration, governance, and use of the data. The assessment covers issues such as how frequently the data come in, the amount of data, how data must be secured, and how external data will be integrated with internal data. This will provide insights about any necessary modifications to the data architecture.
Modifications should be designed to ensure that the data architecture is flexible enough to support the integration of a continuous "conveyor belt" of incoming data from a variety of data sources—for example, by enabling application-programming-interface (API) calls from external sources along with entity-resolution capabilities to intelligently link the external data to internal data. In other cases, it may require tooling to support large-scale data ingestion, querying, and analysis. Data architecture and underlying systems can be updated over time as needs mature and evolve. The final process in this step is ensuring an appropriate and consistent level of quality by constantly monitoring the data used. This involves examining data regularly against the established quality framework to identify whether the source data have changed and to understand the drivers of any changes (for example, schema updates, expansion of data products, change in underlying data sources). If the changes are significant, algorithmic models leveraging the data may need to be retrained or even rebuilt.
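The ongoing monitoring step could be reduced to something like the following sketch, which compares each new delivery against a baseline agreed at onboarding and flags schema drift or falling fill rates; the baseline values and tolerances are illustrative assumptions.

```python
import pandas as pd

# Baseline agreed when the external source was onboarded (illustrative values).
BASELINE = {
    "columns": {"txn_id", "amount", "channel"},
    "fill_rates": {"txn_id": 1.00, "amount": 0.99, "channel": 0.95},
    "max_fill_drop": 0.05,  # tolerated drop before models are re-examined
}

def check_batch(batch: pd.DataFrame) -> list:
    """Compare a new delivery against the baseline; return human-readable warnings."""
    warnings = []
    missing = BASELINE["columns"] - set(batch.columns)
    extra = set(batch.columns) - BASELINE["columns"]
    if missing:
        warnings.append(f"schema drift: missing columns {sorted(missing)}")
    if extra:
        warnings.append(f"schema drift: new columns {sorted(extra)} (data product may have expanded)")
    for col, expected in BASELINE["fill_rates"].items():
        if col in batch.columns:
            observed = batch[col].notna().mean()
            if expected - observed > BASELINE["max_fill_drop"]:
                warnings.append(f"fill rate for '{col}' fell from {expected:.0%} to {observed:.0%}")
    return warnings

batch = pd.DataFrame({"txn_id": [1, 2, 3], "amount": [9.9, None, None], "region": ["EU", "EU", "US"]})
print(check_batch(batch))
```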
Minimizing risk and creating value with external data will require a unique mix of creative problem solving, organizational capability building, and laser-focused execution. That said, business leaders who demonstrate the achievements possible with external data can capture the imagination of the broader leadership team and build excitement for scaling beyond early pilots and tests. An effective route is to begin with a small team that is focused on using external data to solve a well-defined problem and then use that success to generate momentum for expanding external-data efforts across the organization.
Redefine the new code for GCCs: Winning with AI – strategic perspectives
Global Capability Centers (GCCs) are reflections of the strategic components of their parent organizations' business imperatives. GCCs are at an inflection point, as the pace at which AI is changing every aspect of business is exponential and high velocity. The rapid transformation and innovation of GCCs today is driven largely by their ability to position AI as a strategic imperative for their parent organizations. AI is seen as the Trojan horse to catapult GCCs to the next level of innovation & transformation. In recent times, the GCC story has entered a changing era of value and transformative arbitrage.
Most GCCs are aiming to deploy a suite of AI led strategies to position themselves as the model template of an AI Center of Excellence. It is widely predicted that AI will disrupt and transform capability centers in the coming decades. How are Global Capability Centers in India looking at positioning themselves as the model template for developing an AI center of competence? How have the strategies of GCCs transformed with reference to the parent organization, whilst delivering tangible business outcomes, innovation & transformation for parent organizations?
Strategic imperatives for GCCs to consider in order to move incrementally up the value chain, develop an edge, and start winning with AI:
AI transformation :
Artificial Intelligence has become a main focus area for GCCs in India. The increasing digital penetration of GCCs across business verticals has made it imperative to focus on AI. Hence, GCCs are upping their innovation agenda by building bespoke AI capabilities, solutions & offerings. Accelerated AI adoption has transcended industry verticals, with organizations exploring different use cases and application areas. GCCs in India are strategically leveraging one of the following approaches to drive AI penetration ahead –
- Federated Approach: Different teams within GCCs drive AI initiatives
- Centralized Approach: Focus is to build a central team with top talent and niche skills that would cater to the parent organization requirements
- Partner ecosystem : Paves a new channel for GCCs by partnering with research institutes , start-ups , accelerators
- Hybrid Approach: A mix of any two or more above mentioned approaches, and can be leveraged according to GCCs needs and constraints.
Ecosystem creation : Startups / research institutes / accelerators
One of the crucial ways that GCCs can boost their innovation agenda is by collaborating with start-ups, research institutes, and accelerators. Hence, GCCs are employing a variety of strategies to build the ecosystem. These collaborations are a combination of build, buy, and partner models:
- Platform Evangelization: GCCs offer access to their AI platforms to start-ups
- License or Vendor Agreement: GCCs and start-ups enter into a license agreement to create solutions
- Co-innovate: Start-ups and GCCs collaborate to co-create new solutions & capabilities
- Acqui-hire: GCCs acquire start-ups for the talent & capability
- Research centers : GCCs collaborate with academic institutes for joint IP creation, open research , customized programs
- Joint Accelerator program : GCCs & Accelerators build joint program for customized startups cohort
To drive these ecosystem creation models, GCCs can leverage different approaches. Further, successful collaboration programs have a high degree of customization, with clearly defined objectives and talent allocation to drive tangible and impact driven business outcomes.
Differentiated AI Center of Capability :
GCCs are increasingly shifting to competency and capability creation models to reduce time-to-market. In this model, the AI Center of Competence teams are aligned to capability lines of business, where the AI center of competence is responsible for creating AI capabilities, roadmaps, and new value offerings in collaboration with the parent organization's business teams. This alignment, and the specific roles it creates, gives teams clear visibility of business user requirements. Further, capability creation combined with parent organization alignment helps deliver tangible value outcomes. In several cases, AI teams are building a new range of innovation around AI based capabilities and solutions to showcase the ensuing GCC as a model template for innovation & transformation. GCCs need to conceptualize a bespoke strategy for building and sustaining an AI Center of Competence and move it up the value chain with mature and measured transformation & innovation metrics.
AI Talent Mapping Strategy:
With the evolution from analytics and data sciences to AI, the lines between different skills are blurring. GCCs are witnessing a convergence of skills required across verticals. The strategic shift of GCCs towards the AI center of capability model has led to the creation of AI, data engineering & design roles. To build skills in AI & data engineering, GCCs are adopting a hybrid approach. The skill development roadmap for AI is a combination of build and buy strategies. The decision to acquire talent from the ecosystem or internally build capabilities is a function of three parameters – the maturity of a GCC's existing AI capabilities in the desired or adjacent areas, the tactical nature of the skill requirement, and the availability and accessibility of talent in the ecosystem. There is always a heavy inclination towards building skills in-house within GCCs, and a majority of GCCs have stressed that the bulk of future deployment in AI areas will be through in-house skill-building and reskilling initiatives. However, the talent mapping strategy for building AI capability needs a measured approach, else it can become an Achilles' heel for GCC and HR leaders.
GCCs in India are uniquely positioned to drive the next wave of growth by building high impact AI centers of competence. There is a slew of innovative & transformative models that they are working on to up the ante, trigger new customer experiences, products & services, and unleash business transformation for their parent organizations. This will not only set existing GCCs on the path to cutting-edge innovation but also pave the way for other global organizations contemplating a global center setup in India. AI is becoming the front runner in driving innovation & transformation for GCCs.
Cloud Platforms: Strategic Enabler for AI led Transformation
CIOs & CTOs have been toying with the idea of cloud adoption at scale for more than a decade since the first corporate experiments with external cloud platforms were conceptualized, and the verdict is long in on their business value. Companies that adopt the cloud well bring new capabilities to market more quickly, innovate more easily, and scale more efficiently—while also reducing technology risk.
Unfortunately, the verdict is still out on what constitutes a successful cloud implementation to actually capture that value. Most CIOs and CTOs default to traditional implementation models that may have been successful in the past but that make it almost impossible to capture the real value from the cloud. Defining the cloud opportunity too narrowly with siloed business initiatives, such as next-generation application hosting or data platforms, almost guarantees failure. That’s because no design consideration is given to how the organization will need to operate holistically in cloud, increasing the risk of disruption from nimbler attackers with modern technology platforms that enable business agility and innovation.
Companies that reap value from cloud platforms treat their adoption as an AI led business transformation by doing three things:
1. Focusing investments on business domains where cloud can enable increased revenues and improved margins
2. Selecting a technology and sourcing model that aligns with business strategy and risk constraints
3. Developing and implementing an operating model that is oriented around the cloud
CIOs and CTOs need to drive cloud adoption, but, given the scale and scope of change required to exploit this opportunity fully, they also need support and air cover from the rest of the management team.
Using cloud to enable AI led transformation : Only 14 percent of companies launching AI transformations have seen sustained and material performance improvements. Why? Technology execution capabilities are often not up to the task. Outdated AI technology environments make change expensive. Quarterly release cycles make it hard to tune AI capabilities to changing market demands. Rigid and brittle infrastructures choke on the data required for sophisticated analytics.
Operating in the cloud can reduce or eliminate many of these issues. Exploiting cloud services and tooling, however, requires change across all of IT and many business functions as well—in effect, a different business-technology model.
AI led transformation success requires CIOs and tech leaders to do three things :
1. Focus cloud investments in business domains where cloud platforms can enable increased revenues and improved margins:
The vast majority of the value the cloud generates comes from the increased agility, innovation, and resilience it provides to the business with sustained velocity. In most cases, this requires focusing cloud adoption on embedding reusability and composability so that investment in modernizing can be rapidly scaled across the rest of the organization. This approach can also help focus programs on where the benefits matter most, rather than scrutinizing individual applications for potential cost savings.
Faster time to market: Cloud-native companies can release code into production hundreds or thousands of times per day using end-to-end automation. Even traditional enterprises have found that automated cloud platforms allow them to release new capabilities daily, enabling them to respond to market demands and quickly test what does and doesn’t work. As a result, companies that have adopted cloud platforms report that they can bring new capabilities to market about 20 to 40 percent faster.
Ability to create innovative business offerings: Each of the major cloud service providers offers hundreds of native services and marketplaces that provide access to third-party ecosystems with thousands more. These services rapidly evolve and grow and provide not only basic infrastructure capabilities but also advanced functionality such as facial recognition, natural-language processing, quantum computing, and data aggregation.
Reduced risk: Cloud clearly disrupts existing security practices and architectures, but it also offers those that can design their platforms to consume cloud securely a rare opportunity to eliminate vast operational overhead. Taking advantage of the multibillion-dollar investments CSPs have made in security operations requires a cyber-first design that automatically embeds robust standardized authentication, hardened infrastructure, and resilient, interconnected data-center availability zones.
Efficient scalability: Cloud enables companies to automatically add capacity to meet surge demand (in response to increasing customer usage, for example) and to scale out new services in seconds rather than the weeks it can take to procure additional on-premises servers. This capability has been particularly crucial during the COVID-19 pandemic, when the massive shift to digital channels created sudden and unprecedented demand peaks.
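To make the scaling mechanics concrete, the following minimal sketch (in Python, with illustrative function and parameter names of our own) reproduces the proportional scale-out rule that managed autoscalers such as the Kubernetes Horizontal Pod Autoscaler apply: capacity grows in direct ratio to observed load and is clamped between a configured floor and ceiling.

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float,
                     min_replicas: int = 2,
                     max_replicas: int = 100) -> int:
    """Replica count a horizontal autoscaler would request.

    Mirrors the proportional rule used by common cloud autoscalers:
    scale in direct ratio to observed load, clamped to configured bounds.
    """
    if target_utilization <= 0:
        raise ValueError("target_utilization must be positive")
    proposed = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, proposed))

# Example: a traffic surge pushes CPU utilization to 120% against a 60% target.
print(desired_replicas(current_replicas=10, current_utilization=1.2, target_utilization=0.6))  # -> 20
```

The point of the sketch is that surge capacity is a policy setting (targets, floors, ceilings) rather than a procurement cycle; in an on-premises environment the same surge would require buying and racking servers weeks in advance.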
2. Select a technology, sourcing, and migration model that aligns with business and risk constraints
Decisions about cloud architecture and sourcing carry significant risk and cost implications—to the tune of hundreds of millions of dollars for large companies. The wrong technology and sourcing decisions will raise concerns about compliance, execution success, cybersecurity, and vendor risk—more than one large company has stopped its cloud program cold because of multiple types of risk. The right technology and sourcing decisions not only mesh with the company’s risk appetite but can also “bend the curve” on cloud-adoption costs, generating support and excitement for the program across the management team.
If CIOs or CTOs make those decisions based on the narrow criteria of IT alone, they can create significant issues for the business. Instead, they must develop a clear picture of the business strategy as it relates to technology cost, investment, and risk.
3. Change operating models to capture cloud value
Capturing the value of migrating to the cloud requires changing both how IT works and how IT works with the business. The best CIOs and CTOs follow a number of principles in building a cloud-ready operating model:
Make everything a product: To optimize application functionality and mitigate technical debt, CIOs need to shift from “IT projects” to “products”—the technology-enabled offerings used by customers and employees. Most products will provide business capabilities such as order capture or billing. Automated as-a-service platforms will provide underlying technology services such as data management or web hosting. This approach focuses teams on delivering a finished working product rather than isolated elements of the product. This more integrated approach requires stable funding and a “product owner” to manage it.
Integrate with business. Achieving the speed and agility that cloud promises requires frequent interaction with business leaders to make a series of quick decisions. Practically, business leaders need to appoint knowledgeable decision makers as product owners for business-oriented products. These are people who have the knowledge and authority to make decisions about how to sequence business functionality as well as the understanding of the journeys of their “customers.”
Drive cloud skill sets across development teams. Traditional centers of excellence charged with defining configurations for cloud across the entire enterprise quickly get overwhelmed. Instead, top CIOs invest in delivery designs that embed mandatory self-service and co-creation approaches using abstracted, unified ways of working that are socialized using advanced training programs (such as “train the trainer”) to embed cloud knowledge in each agile tribe and even squad.
How technology leaders can join forces with leadership to drive AI-led transformation
Given the economic and organizational complexity required to get the greatest benefits from the cloud, heads of infrastructure, CIOs, and CTOs need to engage with the rest of the leadership team. That engagement is especially important in the following areas:
Technology funding. Technology funding mechanisms frustrate cloud adoption—they prioritize features that the business wants now rather than critical infrastructure investments that will allow companies to add functionality more quickly and easily in the future. Each new bit of tactical business functionality built without best-practice cloud architectures adds to your technical debt—and thus to the complexity of building and implementing anything in the future. CIOs and CTOs need support from the rest of the management team to put in place stable funding models that will provide resources required to build underlying capabilities and remediate applications to run efficiently, effectively, and safely in the cloud.
Business-technology collaboration. Getting value from cloud platforms requires knowledgeable product owners with the power to make decisions about functionality and sequencing. That won’t happen unless the CEO and relevant business-unit heads mandate people in their organizations to be product owners and provide them with decision-making authority.
Engineering talent. Adopting the cloud requires specialized and sometimes hard-to-find technical talent—full-stack developers, data engineers, cloud-security engineers, identity and access-management specialists, cloud engineers, and site-reliability engineers. Unfortunately, some policies put in place a decade ago to contain IT costs can get in the way of onboarding cloud talent. Companies have adopted policies that limit costs per head and the number of senior hires, for example, which require the use of outsourced resources in low-cost locations. Collectively, these policies produce the reverse of what the cloud requires, which is a relatively small number of highly talented and expensive people who may not want to live in traditionally low-cost IT locations. CIOs and CTOs need changes in hiring and location policies to recruit and retain the talent needed for success in the cloud.
The recent COVID-19 pandemic has only heightened the need for companies to adopt AI-led business models. Only cloud platforms can provide the agility, scalability, and innovative capabilities required for this transition. While there have been frustrations and false starts in the enterprise cloud journey, companies can dramatically accelerate their progress by focusing cloud investments where they will provide the most business value and building cloud-ready operating models.
Best Practices to Accelerate & Transform Analytics Adoption in the Cloud
Reimagining analytics in the cloud enables enterprises to achieve greater agility, increase scalability, and optimize costs. But organizations take different paths to achieving their goals. The best way to proceed will depend on the data environment and business objectives. There are two best practices to maximize analytics adoption in the cloud:
• Cloud Data Warehouse, Data Lake, and Lakehouse Transformation: Strategically moving the data warehouse and data lake to the cloud over time and adopting a modern, end-to-end data infrastructure for AI and machine learning projects.
• New Cloud Data Warehouse and Data Lake: Start small and fast and grow as needed by spinning up a new cloud data warehouse or cloud data lake. The same guidance applies whether implementing new data warehouses and data lakes in the cloud for the first time, or doing so for an individual department or line of business.
As cloud adoption grows, most organizations will eventually want to modernize their enterprise analytics infrastructure entirely in the cloud. With the transformation pathway, rebuild everything to take advantage of the most modern cloud-based enterprise data warehouse, data lake, and lakehouse technology, so as to end up in the strongest position long term. But migrate data and workloads from the existing on-premises enterprise data warehouse and data lake to the cloud incrementally, over time. This approach allows enterprises to be strategic while minimizing disruption. Enterprises can take the time to carefully evaluate data and bring over only what is needed, which makes this a less risky approach. It also enables more complex analysis of data using artificial intelligence and machine learning. The combination of a cloud data warehouse and data lake allows enterprises to manage the data necessary for analytics by providing economical scalability across compute and storage that is not possible with an on-premises infrastructure. And it enables them to incorporate new types of data, from IoT sensors, social media, text, and more, into their analysis to gain new insights.
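As a rough illustration of the incremental approach, the sketch below copies only rows changed since the last run from an on-premises warehouse into cloud object storage as Parquet. The connection string, table name, watermark column, and bucket are hypothetical placeholders, and a real migration would typically lean on the chosen cloud provider's native ingestion tooling; this is a minimal sketch of the watermark pattern only (it assumes pandas, SQLAlchemy, pyarrow, and s3fs are available).

```python
# Illustrative watermark-based incremental copy from an on-premises warehouse
# to cloud object storage. All identifiers below are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine, text

SOURCE = create_engine("postgresql://user:pass@onprem-dwh:5432/analytics")
WATERMARK_COLUMN = "updated_at"   # column that records when a row last changed

def migrate_increment(table: str, last_watermark: str, bucket: str) -> str:
    """Copy rows changed since the last run and land them as Parquet."""
    query = text(f"SELECT * FROM {table} WHERE {WATERMARK_COLUMN} > :wm")
    df = pd.read_sql(query, SOURCE, params={"wm": last_watermark})
    if df.empty:
        return last_watermark                      # nothing new; keep the old watermark
    # Parquet in object storage is readable by most cloud warehouses and
    # lakehouse engines without further transformation.
    df.to_parquet(f"s3://{bucket}/{table}/{last_watermark}.parquet", index=False)
    return str(df[WATERMARK_COLUMN].max())

# new_watermark = migrate_increment("sales_orders", "2021-01-01", "enterprise-lakehouse-raw")
```

Running such a job on a schedule, table by table, is one way to migrate strategically over time rather than lifting everything at once.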
For this pathway, enterprises need an intelligent, automated data platform that delivers a number of critical capabilities. It should handle new data sources, accommodate AI and machine learning projects, support new processing engines, deliver performance at massive scale, and offer serverless scale-up/scale-down capabilities. As with a brand-new cloud data warehouse or data lake, enterprises need cloud-native, best-of-breed data integration, data quality, and metadata management to maximize the value of cloud analytics. Once the data is in the cloud, the organization can provide users with self-service access to it so they can more easily create reports or make swift decisions. Ultimately, this transformation pathway gives organizations an end-to-end modern infrastructure for next-generation cloud analytics.
Lines of business increasingly rely on analytics to improve processes and business impact. For example, sales and marketing no longer ask, “How many leads did we generate?” They want to know how many sales-ready leads were gathered from Global 500 accounts, as evidenced by user time spent consuming content on the web. But individual lines of business may not have the time or resources to create and maintain an on-premises data warehouse to answer these questions. With a new cloud data warehouse and data lake, departments can get analytics projects off the ground quickly and cost-effectively. Departments simply spin up their own cloud data warehouses, populate them with data, and make sure they are connected to analytics and BI tools. For data science projects, a team may want to quickly add a cloud data lake. In some cases, this approach enables the team to respond to requests for sophisticated analysis faster than centralized teams normally can. Whatever the purpose of the new cloud data warehouse and data lake, enterprises need intelligent, automated cloud data management with best-of-breed, cloud-native data integration, data quality, and metadata management, all built on a cloud-native platform, in order to deliver value and drive ROI. And note that while this approach allows enterprises to start small and scale as needed, the downside is that the data warehouse and data lake may only benefit a particular department inside the enterprise.
Some organizations with significant investments in on-premises enterprise data warehouses and data lakes are looking to simply replicate their existing systems to the cloud. By lifting and shifting their data warehouse or data lake “as is” to the cloud, they seek to improve flexibility, increase scalability, and lower data center costs while migrating quickly to minimize disruption. Lifting and shifting an on-premises system to the cloud may seem fast and safe. But in reality, it’s an inefficient approach, one that’s like throwing everything you own into a moving van instead of packing strategically for a plane trip. In the long run, reducing baggage and traveling by air delivers greater agility and faster results because you are not weighed down by unnecessary clutter. Some organizations may need to do a lift and shift, but most will find it’s not the best course of action because it simply persists outdated or inefficient legacy systems and offers little in the way of innovation.
CXO Insights: Establishing AI fluency with Boards – The new strategic imperative
Though a rhetorical theme, we can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that is relevant whether organizations are developing AI systems or buying AI-powered software. With AI adoption increasingly widespread, it is time for every board to develop a proactive approach for overseeing how AI operates within the context of the organization’s overall mission and risk management.
According to a recent global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”
Why is this an imperative? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. Recently, IBM stopped selling facial recognition technology altogether. Further, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems and platforms in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them. Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environmental, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that AI needs to be added to the board’s portfolio.
How Boards can assess the quality & impact of AI
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance.
Boards assessing the quality and impact of AI, and the oversight it requires, should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a siloed thing. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
- The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized on goals articulated — most often — by those developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren’t resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives.
The choices made by AI developers — or by an HR manager considering a third-party resume-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but some are not, such as those that result in hard-to-detect errors or bias that can suppress diversity or that charge customers different prices based on gender. Board oversight must include requirements for policies at both the corporate level and the use-case level that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed up by practical processes, strong culture, and compliance structures.
Enterprises may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — is the most likely driver of AI accountability in the short term. Individuals charged with advancing responsible AI within companies have reported that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust, quality, and mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Business continuity concerns. Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous and unknown third-party technologies, which might threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI carries the potential of new kinds of cyber threats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
The “new” strategic imperative for boards
Regardless of how a board decides to approach AI fluency, it will play a critical role in considering the impact of the AI technologies that a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment. The hardest questions a board will face about proposed AI applications are likely to be “Should we adopt AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal or at odds with organizational values.
Boards should do the following in order to establish adequate AI fluency mechanics:
- Learn where in the organization AI and other exponential technologies are being used or are planned to be used, and why.
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight.
- Encourage the appointment of a C-level executive to be responsible for this work, across company functions.
- Encourage adequate resourcing and training of the oversight function.
It’s not too soon for boards to begin this work; even for enterprises with little investment in AI development, it will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards. AI fluency is the new strategic agenda.
Managing Bias in AI: Strategic Risk Management Strategy for Banks
AI is set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and strengthen risk management. According to the EIU, this could generate value of more than $250 billion in the banking industry. But there is a downside, since ML models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models. The added risk brought on by the complexity of algorithmic models can be mitigated by making well-targeted modifications to existing validation frameworks.
Conscious of the problem, many banks are proceeding cautiously, restricting the use of ML models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of antidiscrimination laws and incur significant fines—a concern that pushed one bank to ban its HR department from using a machine-learning resume screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide. Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.
New Risk mitigation exercises for ML models
There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the “flash crash” of the British pound by 6 percent in 2016, for example, and it was reported that a self-driving car tragically failed to properly identify a pedestrian walking her bicycle across the street. The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills. The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks’ existing model-validation frameworks.
Here are the strategic approaches for enterprises to ensure that the specific risks associated with machine learning are addressed:
Demystification of “black boxes”: Machine-learning models have a reputation for being “black boxes.” Depending on the model’s architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model’s recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model’s less-than-transparent recommendations could have serious adverse consequences.
The degree of demystification required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model’s risk. In the United States, models that determine whether to grant credit to applicants are covered by fair-lending laws. The models therefore must be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model’s recommendations to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model’s reasons for doing so is not important. Validators also need to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:
- Linear and monotonic models (for example, linear-regression models): linear coefficients help reveal the dependence of a result on the inputs.
- Nonlinear and monotonic models (for example, gradient-boosting models with monotonic constraints): restricting inputs so that they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.
- Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability; a minimal sketch of this last approach follows below.
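As a hedged illustration of the last category, the sketch below uses the open-source shap package to produce per-decision Shapley attributions for a non-linear, non-monotonic tree ensemble. The data and feature names are synthetic stand-ins, not real credit variables.

```python
# Sketch: local explanations for a non-linear, non-monotonic model using
# Shapley values via the open-source `shap` package. Data is synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 4))   # stand-ins for, e.g., income, utilisation, tenure, inquiries
y = (X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=1_000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)        # efficient Shapley values for tree ensembles
shap_values = explainer.shap_values(X[:5])   # per-feature contribution for five applicants

# Each row's contributions, added to the expected value, reproduce the model's
# raw (log-odds) prediction, giving validators a per-decision attribution to review.
print(np.round(shap_values, 3))
```

Attributions of this kind are one way to generate the reason codes that fair-lending reviews require, though the choice of explanation technique remains a policy decision.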
Bias: A model can be influenced by four main types of bias: sample bias, measurement bias, algorithmic bias, and bias against groups or classes of people. The latter two types, algorithmic bias and bias against people, can be amplified in machine-learning models. For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.
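A simple synthetic experiment, sketched below, shows the effect: impurity-based importances in a random forest inflate a high-cardinality noise field, while permutation importance computed on held-out data (one common challenger-style diagnostic) deflates it. The field names are placeholders, not a bank's actual features.

```python
# Sketch with synthetic data: a high-cardinality noise feature picks up sizeable
# impurity-based importance, while permutation importance exposes it as noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
country = rng.integers(0, 10, n)        # low-cardinality, genuinely predictive
occupation = rng.integers(0, 500, n)    # high-cardinality, pure noise
label_noise = rng.random(n) < 0.2       # 20% label noise keeps the trees splitting
y = ((country > 6) ^ label_noise).astype(int)

X = np.column_stack([country, occupation])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Impurity-based importances reward the many possible splits of `occupation`.
print("impurity importances   :", rf.feature_importances_)
# Permutation importance on held-out data collapses the noise feature toward zero.
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print("permutation importances:", perm.importances_mean)
```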
To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop “challenger” models, using alternative algorithms to benchmark performance. To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model’s use:
- Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.
- Demographic parity: outcomes are proportionally equal for all protected classes.
- Equal opportunity: true-positive rates are equal for each protected class.
- Equal odds: true-positive and false-positive rates are equal for each protected class.
Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
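A minimal sketch of how validators might quantify those definitions on a scored portfolio is shown below; y_true, y_pred, and group are placeholders for a bank's own outcomes, model decisions, and protected-class labels.

```python
# Sketch: per-group metrics corresponding to demographic parity (selection rate),
# equal opportunity (TPR), and equal odds (TPR and FPR). Inputs are placeholders.
import numpy as np

def fairness_report(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        m = group == g
        positives = y_true[m] == 1
        negatives = y_true[m] == 0
        report[g] = {
            "selection_rate": y_pred[m].mean(),                                   # demographic parity
            "tpr": y_pred[m][positives].mean() if positives.any() else np.nan,    # equal opportunity
            "fpr": y_pred[m][negatives].mean() if negatives.any() else np.nan,    # with tpr: equal odds
        }
    return report

# Example: compare approval decisions across two hypothetical groups.
print(fairness_report(y_true=[1, 0, 1, 1, 0, 0], y_pred=[1, 0, 0, 1, 1, 0], group=list("AAABBB")))
```

Large gaps in these per-group rates do not settle the fairness question by themselves, but they give validators a concrete starting point for the policy conversation.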
Feature engineering: Feature engineering is often much more complex in the development of machine-learning models than in traditional models, for three reasons. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before the training process can begin. Third, increasing numbers of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm’s maximizing the model’s out-of-sample performance.
In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model’s application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review the feature-engineering process only: for example, the processes for data transformation and feature exclusion. Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three considerations are generally needed: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.
Hyperparameters: Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model’s performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and their ability to generalize can be very sensitive to the selected kernel function. Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
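For quantitative hyperparameters, a search over the parameter space can replace rules of thumb. The sketch below uses scikit-learn's randomized search on synthetic data; the parameter ranges and scoring metric are assumptions for illustration, not recommendations.

```python
# Sketch: mapping the hyperparameter space with a randomized search rather
# than relying on rules of thumb. Ranges and scoring are illustrative.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "max_depth": randint(3, 15),         # set before training; not learned from data
        "n_estimators": randint(100, 500),
        "min_samples_leaf": randint(1, 50),
    },
    n_iter=25,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Recording the searched ranges and the resulting sensitivity gives validators evidence that hyperparameter choices were made soundly rather than by trial and error.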
Production readiness: Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic and therefore require more computation. This requirement is commonly overlooked in the model-development process. Developers build complex predictive models only to discover that the bank’s production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards. Validators already assess a range of model risks associated with implementation. For machine learning, however, they will need to expand the scope of this assessment: estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and estimating the runtime required.
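A lightweight latency check of the kind validators might add before sign-off is sketched below; the 50 ms budget, the model object, and the validation rows are assumptions for illustration.

```python
# Sketch: measuring single-transaction scoring latency against an assumed
# (illustrative) 50 ms budget. `model` is any fitted estimator with .predict;
# `sample_rows` is a 2-D NumPy array of validation records.
import time
import numpy as np

LATENCY_BUDGET_MS = 50    # hypothetical real-time fraud-scoring budget

def p99_latency_ms(model, sample_rows, n_calls: int = 1_000) -> float:
    timings = []
    for i in range(n_calls):
        row = sample_rows[i % len(sample_rows)].reshape(1, -1)
        start = time.perf_counter()
        model.predict(row)
        timings.append((time.perf_counter() - start) * 1_000)
    return float(np.percentile(timings, 99))

# latency = p99_latency_ms(candidate_model, X_validation)
# assert latency <= LATENCY_BUDGET_MS, "model does not meet production latency standards"
```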
Dynamic model calibration: Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data. This replaces the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms and Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model’s performance over time. Banks therefore need to decide when to allow dynamic recalibration. They might conclude that with the right controls in place, it is suitable for some applications, such as algorithmic trading. For others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models. With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model’s health, such as out-of-sample performance measures, and guardrails such as exposure limits or other, predefined values that trigger a manual review.
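The sketch below illustrates one possible guardrail of this kind: recent out-of-sample performance is compared with an absolute floor and a trailing baseline, and a breach freezes recalibration pending manual review. The thresholds are illustrative policy parameters, not recommendations.

```python
# Sketch of a recalibration guardrail: freeze a dynamically recalibrated model
# when out-of-sample AUC breaches an absolute floor or drops materially against
# a trailing baseline. Threshold values are illustrative policy choices.
from collections import deque

class RecalibrationGuardrail:
    def __init__(self, floor_auc: float = 0.70, max_drop: float = 0.05, window: int = 10):
        self.floor_auc = floor_auc
        self.max_drop = max_drop
        self.history = deque(maxlen=window)

    def check(self, out_of_sample_auc: float) -> bool:
        """Return True if recalibration may continue, False if the model
        should be frozen and routed for manual review."""
        baseline = sum(self.history) / len(self.history) if self.history else out_of_sample_auc
        self.history.append(out_of_sample_auc)
        if out_of_sample_auc < self.floor_auc:
            return False      # absolute health threshold breached
        if baseline - out_of_sample_auc > self.max_drop:
            return False      # material shift versus the recent baseline
        return True

guard = RecalibrationGuardrail()
print([guard.check(a) for a in (0.78, 0.77, 0.76, 0.69)])  # the last reading breaches the floor
```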
Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. One bank’s model risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cyber security.
From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model’s performance and finely tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping companies mitigate risk and gain the confidence to start harnessing the full power of machine learning.
AIQRATE is a bespoke global AI advisory and consulting firm. A first of its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI-powered transformation and innovation journey and to accentuate their decision making and business performance.
AIQRATE works closely with boards, CXOs, and senior leaders, advising them on navigating their analytics-to-AI journey with the art of the possible, or jump-starting their AI progression with an AI@scale approach, followed by consulting on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We have proven bespoke AI advisory services to enable CXOs and senior leaders to curate and design the building blocks of AI strategy, embed AI@scale interventions, and create AI-powered organizations. AIQRATE’s path-breaking 50+ AI consulting frameworks, assessments, primers, toolkits, and playbooks enable Indian and global enterprises, GCCs, startups, VC/PE firms, and academic institutions to enhance business performance and accelerate decision making.
Visit www.aiqrate.ai to experience our AI advisory services and consulting offerings.