Is your Enterprise AI Ready? Strategic considerations for CXOs
At the enterprise level, AI assumes enormous power and potential: it can disrupt, innovate, enhance, and in many cases totally transform businesses. Multiple reports predict a 300% increase in AI investment over 2020-2022 and estimate that, among several exponential technologies, the AI market will be the largest. There are solid instances where AI investment can pay off, if CEOs adopt the right strategy. Organizations that deploy AI strategically enjoy advantages ranging from cost reductions and higher productivity to top-line benefits such as increased revenue and profits, enhanced customer experiences, and working-capital optimization. Multiple surveys also show that the companies winning at AI are more likely to enjoy broader business benefits.
So How Do You Make Your Enterprise AI Ready?
72% of organizations say they are getting significant impact from AI. But these enterprises have taken clear, practical steps to get the results they want. Here are five strategic orientations for embarking on the process of making the enterprise AI ready:
- Core AI A-team assimilation with diversified skill sets
- Evangelize AI amongst senior management
- Focus on process, not function
- Shift from system-of-record to system-of-intelligence apps, platforms
- Encourage innovation and transformation
Core AI A-team assimilation with diversified skill sets
Through 2022, organizations using cognitive ergonomics and system design in new AI projects will achieve long-term success four times more often than others.
With massive investments in AI startups in 2021 alone, and the exponential efficiencies created by AI, this evolution will happen quicker than many business leaders are prepared for. If you aren’t sure where to start, don’t worry – you’re not alone. The good news is that you still have options:
- You can acquire, or invest in a company applying AI/ML in your market, and gain access to new product and AI/ML talent.
- You can seek to invest as a limited partner in a few early stage AI focused VC firms, gaining immediate access and exposure to vetted early stage innovation, a community of experts and market trends.
- You can set out to build an AI-focused division to optimize your internal processes using AI, and map out how AI can be integrated into your future products. But recruiting in the space is painful and you will need a strong vision and sense of purpose to attract and retain the best.
Process-Based Focus Rather Than Function-Based
One critical element differentiates AI success from AI failure: strategy. AI cannot be implemented piecemeal. It must be part of the organization’s overall business plan, along with aligned resources, structures, and processes. How a company prepares its corporate culture for this transformation is vital to its long-term success. That includes preparing talent by having senior management that understands the benefits of AI; fostering the right skills, talent, and training; managing change; and creating an environment with processes that welcome innovation before, during, and after the transition.
The challenge of AI isn’t just the automation of processes; it’s about the up-front process design and governance you put in place to manage the automated enterprise. The ability to trace the reasoning path AI uses to make decisions is important. This visibility is crucial in banking and financial services, where auditors and regulators require firms to understand the source of a machine’s decision.
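To make that concrete, here is a minimal sketch, assuming scikit-learn and invented loan-application features, of how a model’s reasoning path can be exported and logged for auditors:

```python
# Illustrative only: tracing the reasoning path behind a model decision
# with a scikit-learn decision tree. Features and data are invented.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for historical loan applications.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The full rule set exports as human-readable text for an audit file.
print(export_text(model, feature_names=["income", "debt", "tenure", "age"]))

# decision_path() records exactly which rules fired for one applicant,
# which is the kind of traceability regulators ask for.
applicant = X[:1]
path = model.decision_path(applicant)
print("Nodes visited for this applicant:", path.indices.tolist())
```

Simple, inherently interpretable models like this trade some accuracy for auditability; more complex models need additional explainability tooling to achieve the same visibility.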
Evangelize AI amongst senior management
One of the biggest challenges to enterprise transformation is resistance to change. Surveys have found that senior-management inertia is a leading barrier to AI implementation. C-suite executives may not have warmed up to it either. There is such a lack of understanding of the benefits AI can bring that the C-suite or board members simply don’t want to invest in it, nor do they understand that failing to do so will adversely affect their top and bottom lines and may even cause them to go out of business. Regulatory uncertainty about AI, rough experiences with previous technological innovations, and a defensive posture that protects shareholders rather than stakeholders may be contributing factors.
Pursuing AI without senior management support is difficult. Here the numbers again speak for themselves. The majority of leading AI companies (68%) strongly agree that their senior management understands the benefits AI offers. By contrast, only 7% of laggard firms agree with this view. Curiously, though, the leading group still cites the lack of senior management vision as one of the top two barriers to the adoption of AI.
The Dawn of System-of-Intelligence Apps & Platforms
Analyst reports predict that an intelligence stack will gain rapid adoption in enterprises as IT departments shift from system-of-record to system-of-intelligence apps, platforms, and priorities. The future of enterprise software is being defined by increasingly intelligent applications today, and this will accelerate in the future.
By 2022, AI platform services will cannibalize revenues for 30% of market-leading companies.
It will be commonplace for enterprise apps to have machine learning algorithms that can provide predictive insights across a broad base of scenarios encompassing a company’s entire value chain. The potential exists for enterprise apps to change selling and buying behaviour, tailoring specific responses based on real-time data to optimize discounting, pricing, proposal and quoting decisions.
The Process of Supporting Innovation
Besides developing capabilities among employees, an organization’s culture and processes must also support new approaches and technologies. Innovation waves take a lot longer because of the human element. You can’t just put posters on the walls and say, ‘Hey, we have become an AI-enabled company, so let’s change the culture.’ The way it works is to identify and drive visible examples of adoption. Algorithmic trading, image recognition/tagging, and patient data processing are predicted to be the top AI use cases by 2025. Predictive maintenance and content distribution on social media are forecast to be the fourth- and fifth-highest revenue-producing AI use cases over the next eight years.
In the End, it’s about Transforming the Enterprise
AI is part of a much bigger process of re-engineering enterprises. That is the major difference between the automation attempts of yesteryear and today’s AI: AI is completely integrated into the fabric of business, allowing private and public-sector organizations to transform themselves and society in profound ways. Enterprises that deploy AI at full scale will reap tangible benefits at both strategic and operational levels.
Reimagine Business Strategy & Operating Models with AI : The CXO’s Playbook
AlphaGo caused a stir by defeating 18-time world champion Lee Sedol in Go, a game thought to be impenetrable by AI for another 10 years. AlphaGo’s success is emblematic of a broader trend: An explosion of data and advances in algorithms have made technology smarter than ever before. Machines can now carry out tasks ranging from recommending movies to diagnosing cancer — independently of, and in many cases better than, humans. In addition to executing well-defined tasks, technology is starting to address broader, more ambiguous problems. It’s not implausible to imagine that one day a “strategist in a box” could autonomously develop and execute a business strategy. I have spoken to several CXOs and leaders who express such a vision — and they would like to embed AI in the business strategy and their operating models.
Business Processes – Increasing productivity by reducing disruptions
AI algorithms are not natively “intelligent”; they learn inductively by analyzing data. Most leaders are investing in AI talent and have built robust information infrastructures. When Airbus started to ramp up production of its new A350 aircraft, the company faced a multibillion-euro challenge: the plan was to increase the production rate of that aircraft faster than ever before. To do that, it needed to address issues like responding quickly to disruptions in the factory, because disruptions will happen. Airbus turned to AI. It combined data from past production programs, continuing input from the A350 program, fuzzy matching, and a self-learning algorithm to identify patterns in production problems. AI led to the rectification of about 70% of Airbus’s production disruptions, by matching them to previously used solutions in near real time.
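The underlying pattern is simple to illustrate. Below is a minimal sketch, using Python’s standard-library difflib and invented incident data rather than Airbus’s actual system, of matching a new disruption to fixes that worked before:

```python
# Illustrative only: fuzzy-match a new production disruption to past
# incidents and surface the fix used previously. Data is invented.
import difflib

past_incidents = {
    "torque wrench calibration drift on station 40": "recalibrate and swap the tool",
    "late delivery of cabin wiring harness": "re-sequence tasks and pull buffer stock",
    "misaligned fuselage panel at join": "re-shim per rework instruction",
}

def suggest_fix(new_issue: str, cutoff: float = 0.5):
    # get_close_matches gives cheap fuzzy string matching out of the box.
    matches = difflib.get_close_matches(new_issue, list(past_incidents), n=1, cutoff=cutoff)
    return past_incidents[matches[0]] if matches else None

print(suggest_fix("torque wrench calibration drifting on station 40"))
```

A production system would match on structured features and learn continuously, but the near-real-time “have we seen this before?” lookup is the core idea.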
Just as it is enabling speed and efficiency at Airbus, AI capabilities are leading directly to new, better processes and results at other pioneering organizations. Other large companies, such as BP, Wells Fargo, and Ping An Insurance, are already solving important business problems with AI. Many others, however, have yet to get started.
Integrated Strategy Machine – The Implementation Scope of AI @ scale
The integrated strategy machine is the AI analogy of what new factory designs were for electricity. In other words, the increasing intelligence of machines could be wasted unless businesses reshape the way they develop and execute their strategies. No matter how advanced technology is, it needs human partners to enhance competitive advantage. It must be embedded in what we call the integrated strategy machine. An integrated strategy machine is the collection of resources, both technological and human, that act in concert to develop and execute business strategies. It comprises a range of conceptual and analytical operations, including problem definition, signal processing, pattern recognition, abstraction and conceptualization, analysis, and prediction. One of its critical functions is reframing, which is repeatedly redefining the problem to enable deeper insights.
Amazon represents the state-of-the-art in deploying an integrated strategy machine. It has at least 21 AI systems, which include several supply chain optimization systems, an inventory forecasting system, a sales forecasting system, a profit optimization system, a recommendation engine, and many others. These systems are closely intertwined with each other and with human strategists to create an integrated, well-oiled machine. If the sales forecasting system detects that the popularity of an item is increasing, it starts a cascade of changes throughout the system: The inventory forecast is updated, causing the supply chain system to optimize inventory across its warehouses; the recommendation engine pushes the item more, causing sales forecasts to increase; the profit optimization system adjusts pricing, again updating the sales forecast.
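A toy sketch of that cascade, with invented system names and update rules, shows how one popularity signal ripples through interlinked systems:

```python
# Illustrative only: a popularity signal cascading through loosely
# coupled forecasting, supply chain, recommendation, and pricing logic.
class StrategyMachine:
    def __init__(self):
        self.sales_forecast = 100.0   # units per week
        self.inventory_target = 120.0
        self.promo_weight = 1.0
        self.price = 20.0

    def on_popularity_signal(self, uplift: float):
        # 1. The sales forecast updates first...
        self.sales_forecast *= 1 + uplift
        # 2. ...which makes the supply chain rebalance inventory,
        self.inventory_target = self.sales_forecast * 1.2
        # 3. the recommendation engine push the item harder,
        self.promo_weight *= 1.1
        # 4. and the profit optimizer adjust price, which feeds back
        #    into the sales forecast as a small demand response.
        self.price *= 1.05
        self.sales_forecast *= 0.98

machine = StrategyMachine()
machine.on_popularity_signal(uplift=0.15)
print(vars(machine))
```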
Manufacturing Operations – An AI assistant on the floor
CXOs at industrial companies expect the largest effect in operations and manufacturing. BP plc, for example, augments human skills with AI to improve operations in the field. Its BP Well Advisor takes all of the data coming off the drilling systems, creates advice for the engineers to adjust their drilling parameters to remain in the optimum zone, and alerts them to potential operational upsets and risks down the road. BP is also trying to automate root-cause failure analysis so that the system trains itself over time and has the intelligence to move rapidly from description to prediction to prescription.
Customer-facing activities – Near real-time scoring
Ping An Insurance Co. of China Ltd., the second-largest insurer in China, with a market capitalization of $120 billion, is improving customer service across its insurance and financial services portfolio with AI. For example, it now offers an online loan in three minutes, thanks in part to a customer scoring tool that uses an internally developed AI-based face-recognition capability that is more accurate than humans. The tool has verified more than 300 million faces in various uses and now complements Ping An’s cognitive AI capabilities including voice and imaging recognition.
AI for Different Operational Strategy Models
To make the most of this technology across the business operations of your enterprise, consider the three main ways that businesses can or will use AI:
1. Insights enabled intelligence
Now widely available, insights enabled intelligence improves what people and organizations are already doing. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotion” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides. Insights enabled intelligence tends to involve clearly defined, rules-based, repeatable tasks.
Insights enabled intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behaviour, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
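A stylized sketch of this kind of simulator, with an invented demand model and a far smaller parameter grid than the 200,000 variations described above:

```python
# Illustrative only: score launch variations against a toy demand model.
import itertools
import random

random.seed(0)

def simulated_sales(price, marketing_spend, city_density):
    # Toy demand curve: lower prices, more marketing, and denser
    # cities all increase sales, with some noise.
    base = 1000 * city_density
    noise = random.gauss(1.0, 0.05)
    return base * (30000 / price) ** 0.8 * (1 + 0.002 * marketing_spend) * noise

grid = itertools.product(
    [25000, 30000, 35000],  # price points
    [50, 100, 200],         # marketing spend, illustrative units
    [0.5, 1.0, 1.5],        # city-density factor
)
best = max(grid, key=lambda variation: simulated_sales(*variation))
print("Best-performing variation (price, spend, density):", best)
```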
2. Recommendation based Intelligence
Recommendation based Intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do. Unlike insights enabled intelligence, it fundamentally alters the nature of the task, and business models change accordingly.
Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behaviour, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI).
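The core mechanic is easy to sketch. Here is a bare-bones collaborative-filtering example with a toy watch matrix; it is illustrative only, not Netflix’s actual algorithm:

```python
# Illustrative only: recommend an unwatched title scored by the
# behaviour of similar users (user-based collaborative filtering).
import numpy as np

# Rows = users, columns = titles; 1 means the user watched the title.
watch = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 1, 1, 0],
])

def recommend(user: int) -> int:
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(watch, axis=1)
    sims = watch @ watch[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0
    # Weight the audience's watches by similarity, mask titles already seen.
    scores = sims @ watch
    scores[watch[user] == 1] = -np.inf
    return int(np.argmax(scores))

print("Recommend title index:", recommend(user=0))
```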
3. Decision enabled Intelligence
Being developed for the future, decision enabled intelligence creates and deploys machines that act on their own. Very few decision enabled intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.
As you contemplate the deployment of artificial intelligence at scale, articulate what mix of the three approaches works best for you.
a) Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, start with insights enabled intelligence and a clear AI strategy roadmap.
b) Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue a decision enabled intelligence approach, probably with more complex AI applications and robust infrastructure.
c) Are you developing a genuinely new platform? In that case, think of building first principles of AI-led strategy across the functionalities and processes of the platform.
CXOs need to create their own AI strategy playbook to reimagine their business strategies and operating models and derive accentuated business performance.
“RE-ENGINEERING” BUSINESSES – THINK “AI” led STRATEGY
AI adoption across industries is galloping at a rapid pace and the resulting benefits are increasing by the day, yet some businesses are challenged by the complexity and confusion that AI can generate. Enterprises can get stuck trying to analyse all that’s possible and all that they could do through AI, when they should be taking the next step of recognizing what’s important and what they should be doing for their customers, stakeholders, and employees. Discovering real business opportunities and achieving desired outcomes can be elusive. To overcome this, enterprises should pursue a constant effort to re-engineer their AI strategy to generate insights and intelligence that lead to real outcomes.
Re-engineering Data Architecture & Infrastructure
To successfully derive value from data immediately, there is a need for faster data analysis than is currently available using traditional data management technology. With the explosion of digital analytics, social media, and the “Internet of Things” (IoT), there is an opportunity to radically re-engineer data architecture to provide organizations with a tiered approach to data collection, combining real-time and historical data analyses. Infrastructure-as-a-service for AI is the combination of components that enables an architecture that delivers the right business outcomes. Developing this architecture involves designing the cluster computing power and networking, along with innovations in software that enable advanced technology services and interconnectivity. Infrastructure is the foundation for optimal processing and storage of data, and it is likewise the foundation for any data farm.
The new era of AI-led infrastructure is built on virtualized (analytics) environments, which can be thought of as the next big “V” of big data. The virtualization approach has several advantages, such as scalability, ease of maintenance, elasticity, cost savings due to better utilization of resources, and the abstraction of the external layer from the internal (back-end) implementation of a service or resource. Containers, a trending technology making headlines recently, are an approach to virtualization and cloud-enabled data centres. Fortune 500 companies have begun to “containerize” their servers, data centres, and cloud applications with Docker. Containerization sidesteps much of the overhead of virtualization by eliminating the hypervisor and its VMs: each application is deployed in its own container, which runs on the “bare metal” of the server plus a single, shared instance of the operating system.
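As a minimal sketch of that one-application-per-container pattern, assuming the Docker SDK for Python (pip install docker), a running local Docker daemon, and placeholder image names:

```python
# Illustrative only: each application runs in its own container on a
# shared OS kernel; no hypervisor, no per-app guest VM.
import docker

client = docker.from_env()

for app_image in ["orders-service:latest", "pricing-service:latest"]:  # placeholders
    container = client.containers.run(
        app_image,
        detach=True,
        name=app_image.split(":")[0],
    )
    print(f"started {container.name} from {app_image}")
```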
AI led Business Process Re-Engineering
The BPR methodologies of the past have significantly contributed to the development of today’s enterprises. However, today’s business landscape has become increasingly complex and fast-paced, and the regulatory environment is constantly changing. Consumers have become more sophisticated and have easy access to information on the go. Staying competitive in the present business environment requires organizations to go beyond process efficiencies, incremental improvements, and enhancing transactional flow. Now, organizations need a comprehensive understanding of their business model through an objective and realistic grasp of their business processes. This entails having organization-wide insights that show the interdependence of various internal functions while taking into consideration regulatory requirements and shifting consumer tastes.
Data is the basis on which fact-based analysis is performed to obtain objective insights of the organization. In order to obtain organization-wide insights, management needs to employ AI capabilities on data that resides both inside and outside its organization. However, an organization’s AI capabilities are primarily dependent on the type, amount and quality of data it possesses.
The integration of an organization’s three key dimensions of people, process and technology is also critical during process design. The people are the individuals responsible and accountable for the organization’s processes. The process is the chain of activities required to keep the organization running. The technology is the suite of tools that support, monitor and ensure consistency in the application of the process. The integration of all these, through the support of a clear governance structure, is critical in sustaining a fact-based driven organizational culture and the effective capture, movement and analysis of data. Designing processes would then be most effective if it is based on data-driven insights and when AI capabilities are embedded into the re-engineered processes. Data-driven insights are essential in gaining a concrete understanding of the current business environment and utilizing these insights is critical in designing business processes that are flexible, agile and dynamic.
Re-engineering Customer Experience (CX) – The new paradigm
It’s always of great interest to me to see new trends emerge in our space. One such trend gaining momentum is enterprises looking to solve customer needs and expectations with what I’d describe as re-engineering the customer experience. Just like everything else in our industry, changes in consumer behaviour caused by mobile and social trends are disrupting the CX space. Just a few years ago, web analytics solutions gave brands the best view into the performance of their digital business and user behaviours. Fast-forward to today, and this is often not the case. With the growth in volume and importance of new devices, digital channels, and touch points, CX solutions are now just one of the many digital data silos that brands need to deal with and integrate into the full digital picture. While some vendors may now offer ways for their solutions to run in different channels and on a range of devices, these capabilities are often still a work in progress. Many enterprises today find their CX solution is another critical set of insights that must be loaded daily into an omni-channel AI data store, with visualization run on top to provide cross-channel business reporting.
Re-shaping Talent Acquisition and Engagement with AI
AI is causing disruption in virtually every function, but talent acquisition is one of the more recent to get a business refresh. A new data-driven approach to talent management is reshaping the way organizations find and hire staff, while the power of talent analytics is also changing how HR tackles employee retention and engagement. The implications for anyone hoping to land a job, and for businesses that have traditionally relied on personal relationships, are extreme, but robots and algorithms will not yet completely replace human interaction. AI will certainly help to identify talent in specific searches. Rather than relying on a rigorous interview process and a resume, employers are able to “mine” deep reserves of information, including a candidate’s online footprint. The real value will be in identifying personality types, abilities, and other strengths to help create well-rounded teams. Companies are also using people analytics to understand the stress levels of their employees to ensure long-term productiveness and wellness.
The Final Word
Based on my experiences with clients across enterprises, GCCs, and start-ups, alignment among the three key dimensions of talent, process, and AI-led technology within a robust governance structure is critical to effectively utilize AI and remain competitive in the current business environment. AI can open doors to growth and scalability through insights and intelligence, resulting in the identification of industry white spaces. It enhances operational efficiency through process improvements based on relevant and fact-based data. It enriches human capital through workforce analysis, resulting in more effective human capital management. It mitigates risks by identifying areas of regulatory and company-policy non-compliance before actual damage is done. An AI-led re-engineering approach unleashes the potential of an organization by putting the facts and the reality into the hands of the decision makers.
(AIQRATE is a bespoke global AI advisory and consulting firm, a first in its genre. AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI-powered transformation and innovation journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs, and senior leaders, advising them on navigating their Analytics-to-AI journey with the art of the possible, or making them jumpstart to an AI rhythm with an AI@scale approach, followed by consulting them on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We have proven bespoke AI advisory services to enable CXOs and senior leaders to curate and design the building blocks of AI strategy, embed AI@scale interventions, and create AI-powered organizations.
AIQRATE’s path breaking 50+ AI consulting frameworks, assessments, primers, toolkits and playbooks enable Indian & global enterprises, GCCs, Startups, SMBs, VC/PE firms, and Academic Institutions enhance business performance and accelerate decision making.
AIQRATE also consults with consulting firms, technology service providers, pure-play AI firms, technology behemoths, and platform enterprises on curating differentiated and bespoke AI capabilities and offerings, market development scenarios, and GTM approaches.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings)
Data Driven Enterprise – Part II: Building an operative data ecosystems strategy
Ecosystems—interconnected sets of services in a single integrated experience—have emerged across a range of industries, from financial services to retail to healthcare. Ecosystems are not limited to a single sector; indeed, many transcend multiple sectors. For traditional incumbents, ecosystems can provide a golden opportunity to increase their influence and fend off potential disruption by faster-moving digital attackers. For example, banks are at risk of losing half of their margins to fintechs, but they have the opportunity to increase margins by a similar amount by orchestrating an ecosystem.
In my experience, many ecosystems focus on the provision of data: exchange, availability, and analysis. Incumbents seeking to excel in these areas must develop the proper data strategy, business model, and architecture.
What is a data ecosystem?
Simply put, a data ecosystem is a platform that combines data from numerous providers and builds value through the usage of processed data. A successful ecosystem balances two priorities:
- Building economies of scale by attracting participants through lower barriers to entry. In addition, the ecosystem must generate clear customer benefits and dependencies beyond the core product to establish high exit barriers over the long term.
- Cultivating a collaboration network that motivates a large number of parties with similar interests (such as app developers) to join forces and pursue similar objectives. One of the key benefits of the ecosystem comes from the participation of multiple categories of players (such as app developers and app users).
What are the data-ecosystem archetypes?
As data ecosystems have evolved, five archetypes have emerged. They vary based on the model for data aggregation, the types of services offered, and the engagement methods of other participants in the ecosystem.
- Data utilities. By aggregating data sets, data utilities provide value-adding tools and services to other businesses. The category includes credit bureaus, consumer-insights firms, and insurance-claim platforms.
- Operations optimization and efficiency centers of excellence. This archetype vertically integrates data within the business and the wider value chain to achieve operational efficiencies. An example is an ecosystem that integrates data from entities across a supply chain to offer greater transparency and management capabilities.
- End-to-end cross-sectorial platforms. By integrating multiple partner activities and data, this archetype provides an end-to-end service to the customers or business through a single platform. Car reselling, testing platforms, and partnership networks with a shared loyalty program exemplify this archetype.
- Marketplace platforms. These platforms offer products and services as a conduit between suppliers and consumers or businesses. Amazon and Alibaba are leading examples.
- B2B infrastructure (platform as a business). This archetype builds a core infrastructure and tech platform on which other companies establish their ecosystem business. Examples of such businesses are data-management platforms and payment-infrastructure providers.
The ingredients for a successful data ecosystem
Data ecosystems have the potential to generate significant value. However, the entry barriers to establishing an ecosystem are typically high, so companies must understand the landscape and potential obstacles. Typically, the hardest pieces to figure out are finding the best business model to generate revenues for the orchestrator and ensuring participation.
If the market already has a large, established player, companies may find it difficult to stake out a position. To choose the right partners, executives need to pinpoint the value they can offer and then select collaborators who complement and support their strategic ambitions. Similarly, companies should look to create a unique value proposition and excellent customer experience to attract both end customers and other collaborators. Working with third parties often requires additional resources, such as negotiating teams supported by legal specialists to negotiate and structure the collaboration with potential partners. Ideally, partnerships should be mutually beneficial arrangements between the ecosystem leader and other participants.
As companies look to enable data pooling and the benefits it can generate, they must be aware of laws regarding competition. Companies that agree to share access to data, technology, and collection methods restrict access for other companies, which could raise anti-competition concerns. Executives must also ensure that they address privacy concerns, which can differ by geography.
Other capabilities and resources are needed to create and build an ecosystem. For example, to find and recruit specialists and tech talent, organizations must create career opportunities and a welcoming environment. Significant investments will also be needed to cover the costs of data-migration projects and ecosystem maintenance.
Ensuring ecosystem participants have access to data
Once a company selects its data-ecosystem archetype, executives should then focus on setting up the right infrastructure to support its operation. An ecosystem can’t deliver on its promise to participants without ensuring access to data, and that critical element relies on the design of the data architecture. We have identified five questions that incumbents must resolve when setting up their data ecosystem.
How do we exchange data among partners in the ecosystem?
Industry experience shows that standard data-exchange mechanisms among partners, such as cookie handshakes, can be effective. The data exchange typically follows three steps: establishing a secure connection, exchanging data through browsers and clients, and storing results centrally when necessary.
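A schematic sketch of those three steps, using the Python requests library with placeholder endpoints and tokens:

```python
# Illustrative only: a partner-to-partner data pull over TLS.
import json
import requests

# 1. Establish a secure (TLS) connection and authenticate to the partner.
session = requests.Session()
session.headers.update({"Authorization": "Bearer <partner-access-token>"})  # placeholder

# 2. Exchange data over the secure channel.
resp = session.get("https://partner.example.com/api/v1/claims", timeout=30)  # placeholder URL
resp.raise_for_status()
records = resp.json()

# 3. Store results centrally when the use case requires it.
with open("claims_central_store.json", "w") as f:
    json.dump(records, f)
```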
How do we manage identity and access?
Companies can pursue two strategies to select and implement an identity-management system. The more common approach is to centralize identity management through solutions such as Okta, OpenID, or Ping. An emerging approach is to decentralize and federate identity management—for example, by using blockchain ledger mechanisms.
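In the centralized pattern, each participant verifies tokens issued by a shared identity provider. A fragmentary sketch using the PyJWT library, with placeholder issuer, audience, and key:

```python
# Illustrative only: validating an identity-provider token locally.
import jwt  # pip install pyjwt

def verify_participant(token: str, provider_public_key: str) -> dict:
    # One call checks the signature, expiry, issuer, and audience.
    return jwt.decode(
        token,
        provider_public_key,
        algorithms=["RS256"],
        audience="data-ecosystem",         # placeholder audience
        issuer="https://idp.example.com",  # placeholder issuer
    )
```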
How can we define data domains and storage?
Traditionally, an ecosystem orchestrator would centralize data within each domain. More recent trends in data-asset management favor an open data-mesh architecture. Data mesh challenges the conventional centralization of data ownership within one party by using existing definitions and domain assets within each party, based on each use case or product. Certain use cases may still require centralized domain definitions with central storage. In addition, global data-governance standards must be defined to ensure interoperability of data assets.
How do we manage access to non-local data assets, and how can we possibly consolidate?
Most use cases can be implemented with periodic data loads through application programming interfaces (APIs). This approach results in a majority of use cases having decentralized data storage. Pursuing this environment requires two enablers: a central API catalog that defines all APIs available to ensure consistency of approach, and strong group governance for data sharing.
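A minimal sketch of those two enablers, a central API catalog plus a governance gate, with illustrative fields:

```python
# Illustrative only: a central catalog of available APIs with a
# governance flag that gates cross-party data sharing.
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    name: str
    owner_domain: str
    endpoint: str
    refresh_cadence: str    # e.g. "daily" or "hourly" periodic loads
    sharing_approved: bool  # set by the group data-governance board

catalog = {
    "customer-profile": CatalogEntry(
        name="customer-profile",
        owner_domain="marketing",
        endpoint="https://api.example.com/profiles",  # placeholder
        refresh_cadence="daily",
        sharing_approved=True,
    ),
}

def resolve(api_name: str) -> CatalogEntry:
    entry = catalog[api_name]
    if not entry.sharing_approved:
        raise PermissionError(f"{api_name} is not cleared for sharing")
    return entry

print(resolve("customer-profile").endpoint)
```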
How do we scale the ecosystem, given its heterogeneous and loosely coupled nature?
Enabling rapid and decentralized access to data or data outputs is the key to scaling the ecosystem. This objective can be achieved by having robust governance to ensure that all participants of the ecosystem do the following (a minimal metadata sketch follows this list):
- Make their data assets discoverable, addressable, versioned, and trustworthy in terms of accuracy
- Use self-describing semantics and open standards for data exchange
- Support secure exchanges while allowing access at a granular level
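A minimal sketch of what those properties can mean concretely, expressed as metadata each participant publishes with a data asset (field names are illustrative):

```python
# Illustrative only: metadata making a data asset discoverable,
# addressable, versioned, trustworthy, and self-describing.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                     # discoverable: listed in a shared catalog
    address: str                  # addressable: a stable URI for consumers
    version: str                  # versioned: consumers can pin and migrate
    accuracy_sla: float           # trustworthy: a published quality guarantee
    schema: dict = field(default_factory=dict)  # self-describing semantics
    exchange_format: str = "json"               # open standard for exchange
    access_scopes: tuple = ()                   # granular, secure access

asset = DataAsset(
    name="merchant-transactions",
    address="https://data.example.com/assets/merchant-transactions",  # placeholder
    version="2.3.0",
    accuracy_sla=0.995,
    schema={"merchant_id": "string", "amount": "decimal", "ts": "timestamp"},
    access_scopes=("read:aggregated",),
)
print(asset.name, asset.version)
```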
The success of a data-ecosystem strategy depends on data availability and digitization, API readiness to enable integration, data privacy and compliance—for example, General Data Protection Regulation (GDPR)—and user access in a distributed setup. This range of attributes requires companies to design their data architecture to check all these boxes.
As incumbents consider establishing data ecosystems, we recommend they develop a road map that specifically addresses the common challenges. They should then look to define their architecture to ensure that the benefits to participants and themselves come to fruition. The good news is that the data-architecture requirements for ecosystems are not complex. The priority components are identity and access management, a minimum set of tools to manage data and analytics, and central data storage. Truly, developing an operative data ecosystem strategy is far more difficult than getting the tech requirements right.
Data Driven Enterprise – Part I: Building an effective Data Strategy for competitive edge
Few enterprises take full advantage of data generated outside their walls. A well-structured data strategy for using external data can provide a competitive edge. Many enterprises have made great strides in collecting and utilizing data from their own activities. So far, though, comparatively few have realized the full potential of linking internal data with data provided by third parties, vendors, or public data sources. Overlooking such external data is a missed opportunity. Organizations that stay abreast of the expanding external-data ecosystem and successfully integrate a broad spectrum of external data into their operations can outperform other companies by unlocking improvements in growth, productivity, and risk management.
The COVID-19 crisis provides an example of just how relevant external data can be. In a few short months, consumer purchasing habits, activities, and digital behavior changed dramatically, making preexisting consumer research, forecasts, and predictive models obsolete. Moreover, as organizations scrambled to understand these changing patterns, they discovered little of use in their internal data. Meanwhile, a wealth of external data could—and still can—help organizations plan and respond at a granular level. Although external-data sources offer immense potential, they also present several practical challenges. To start, simply gaining a basic understanding of what’s available requires considerable effort, given that the external-data environment is fragmented and expanding quickly. Thousands of data products can be obtained through a multitude of channels—including data brokers, data aggregators, and analytics platforms—and the number grows every day. Analyzing the quality and economic value of data products also can be difficult. Moreover, efficient usage and operationalization of external data may require updates to the organization’s existing data environment, including changes to systems and infrastructure. Companies also need to remain cognizant of privacy concerns and consumer scrutiny when they use some types of external data.
These challenges are considerable but surmountable. This blog series discusses the benefits of tapping external-data sources, illustrated through a variety of examples, and lays out best practices for getting started. These include establishing an external-data strategy team and developing relationships with data brokers and marketplace partners. Company leaders, such as the executive sponsor of a data effort and a chief data and analytics officer, and their data-focused teams should also learn how to rigorously evaluate and test external data before using and operationalizing the data at scale.
External-data success stories
Companies across industries have begun successfully using external data from a variety of sources. The investment community is a pioneer in this space. To predict outcomes and generate investment returns, analysts and data scientists in investment firms have gathered “alternative data” from a variety of licensed and public data sources, many of which draw from the “digital exhaust” of a growing number of technology companies and the public web. Investment firms have established teams that assess hundreds of these data sources and providers and then test their effectiveness in investment decisions.
A broad range of data sources are used, and these inform investment decisions in a variety of ways:
- Investors actively gather job postings, company reviews posted by employees, employee-turnover data from professional networking and career websites, and patent filings to understand company strategy and predict financial performance and organizational growth.
- Analysts use aggregated transaction data from card processors and digital-receipt data to understand the volume of purchases by consumers, both online and offline, and to identify which products are increasing in share. This gives them a better understanding of whether traffic is declining or growing, as well as insights into cross-shopping behaviors.
- Investors study app downloads and digital activity to understand how consumer preferences are changing and how effective an organization’s digital strategy is relative to that of its peers. For instance, app downloads, activity, and rating data can provide a window into the success rates of the myriad of live-streaming exercise offerings that have become available over the last year.
Corporations have also started to explore how they can derive more value from external data. For example, a large insurer transformed its core processes, including underwriting, by expanding its use of external-data sources from a handful to more than 40 in the span of two years. The effort involved was considerable; it required prioritization from senior leadership, dedicated resources, and a systematic approach to testing and applying new data sources. The hard work paid off, increasing the predictive power of core models by more than 20 percent and dramatically reducing application complexity by allowing the insurer to eliminate many of the questions it typically included on customer applications.
Three steps to creating value with external data:
Use of external data has the potential to be game changing across a variety of business functions and sectors. The journey toward successfully using external data has three key steps.
1. Establish a dedicated team for external-data sourcing
To get started, organizations should establish a dedicated data-sourcing team. Per our understanding at AIQRATE, a key role on this team is a dedicated data scout or strategist who partners with the data-analytics team and business functions to identify operational, cost, and growth improvements that could be powered by external data. This person would also be responsible for building excitement around what can be made possible through the use of external data, planning the use cases to focus on, identifying and prioritizing data sources for investigation, and measuring the value generated through use of external data. Ideal candidates for this role are individuals who have served as analytics translators and who have experience in deploying analytics use cases and in working with technology, business, and analytics profiles.
The other team members, who should be drawn from across functions, would include purchasing experts, data engineers, data scientists and analysts, technology experts, and data-review-board members. These team members typically spend only part of their time supporting the data-sourcing effort. For example, the data analysts and data scientists may already be supporting data cleaning and modeling for a specific use case and help the sourcing work stream by applying the external data to assess its value. The purchasing expert, already well versed in managing contracts, will build specialization in data-specific licensing approaches to support those efforts.
Throughout the process of finding and using external data, companies must keep in mind privacy concerns and consumer scrutiny, making data reviewers essential, if peripheral, team members. Data reviewers, who typically include legal, risk, and business leaders, should thoroughly vet new consumer data sets—for example, financial transactions, employment data, and cell-phone data indicating when and where people have entered retail locations. The vetting process should ensure that all data were collected with appropriate permissions and will be used in a way that abides by relevant data-privacy laws and passes muster with consumers. This team will need a budget to procure small exploratory data sets, establish relationships with data marketplaces (such as by purchasing trial licenses), and pay for technology requirements (such as expanded data storage).
2. Develop relationships with data marketplaces and aggregators
While online searches may appear to be an easy way for data-sourcing teams to find individual data sets, that approach is not necessarily the most effective. It generally leads to a series of time-consuming vendor-by-vendor discussions and negotiations. The process of developing relationships with a vendor, procuring sample data, and negotiating trial agreements often takes months. A more effective strategy involves using data-marketplace and -aggregation platforms that specialize in building relationships with hundreds of data sources, often in specific data domains—for example, consumer, real-estate, government, or company data. These relationships can give organizations ready access to the broader data ecosystem through an intuitive search-oriented platform, allowing organizations to rapidly test dozens or even hundreds of data sets under the auspices of a single contract and negotiation. Since these external-data distributors have already profiled many data sources, they can be valuable thought partners and can often save an external-data team significant time. When needed, these data distributors can also help identify valuable data products and act as the broker to procure the data.
Once the team has identified a potential data set, the team’s data engineers should work directly with business stakeholders and data scientists to evaluate the data and determine the degree to which the data will improve business outcomes. To do so, data teams establish evaluation criteria, assessing data across a variety of factors to determine whether the data set has the necessary characteristics for delivering valuable insights . Data assessments should include an examination of quality indicators, such as fill rates, coverage, bias, and profiling metrics, within the context of the use case. For example, a transaction data provider may claim to have hundreds of millions of transactions that help illuminate consumer trends. However, if the data include only transactions made by millennial consumers, the data set will not be useful to a company seeking to understand broader, generation-agnostic consumer trends.
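A condensed sketch of that screening, using pandas with illustrative metrics and toy data:

```python
# Illustrative only: quick quality checks on a candidate external data set.
import pandas as pd

def assess_external_data(df: pd.DataFrame, required_cols: list) -> dict:
    report = {
        # Fill rate: share of non-missing values per required column.
        "fill_rates": df[required_cols].notna().mean().round(2).to_dict(),
        # Coverage: how much of the period and population is represented.
        "row_count": len(df),
        "date_span": (str(df["date"].min().date()), str(df["date"].max().date())),
        # Bias flag: one segment dominating the sample is a red flag,
        # like the millennials-only transaction feed described above.
        "max_segment_share": float(df["age_band"].value_counts(normalize=True).max()),
    }
    return report

sample = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-02", "2021-06-30", None]),
    "amount": [12.5, 40.0, 7.2],
    "age_band": ["25-34", "25-34", "35-44"],
})
print(assess_external_data(sample, required_cols=["date", "amount"]))
```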
3. Prepare the data architecture for new external-data streams
Generating a positive return on investment from external data calls for up-front planning, a flexible data architecture, and ongoing quality-assurance testing. Up-front planning starts with an assessment of the existing data environment to determine how it can support ingestion, storage, integration, governance, and use of the data. The assessment covers issues such as how frequently the data come in, the amount of data, how data must be secured, and how external data will be integrated with internal data. This will provide insights about any necessary modifications to the data architecture.
Modifications should be designed to ensure that the data architecture is flexible enough to support the integration of a continuous “conveyor belt” of incoming data from a variety of data sources—for example, by enabling application-programming-interface (API) calls from external sources along with entity-resolution capabilities to intelligently link the external data to internal data. In other cases, it may require tooling to support large-scale data ingestion, querying, and analysis. Data architecture and underlying systems can be updated over time as needs mature and evolve. The final process in this step is ensuring an appropriate and consistent level of quality by constantly monitoring the data used. This involves examining data regularly against the established quality framework to identify whether the source data have changed and to understand the drivers of any changes (for example, schema updates, expansion of data products, change in underlying data sources). If the changes are significant, algorithmic models leveraging the data may need to be retrained or even rebuilt.
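A small sketch of that ongoing monitoring step, with an invented expected schema, could flag drift like this:

```python
# Illustrative only: detect schema changes in an incoming external feed
# so downstream models can be reviewed, retrained, or rebuilt.
import pandas as pd

EXPECTED_SCHEMA = {  # invented baseline agreed at onboarding
    "date": "datetime64[ns]",
    "amount": "float64",
    "age_band": "object",
}

def schema_drift(df: pd.DataFrame) -> list:
    issues = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    for col in df.columns:
        if col not in EXPECTED_SCHEMA:
            issues.append(f"new column appeared: {col}")  # product expanded
    return issues
```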
Minimizing risk and creating value with external data will require a unique mix of creative problem solving, organizational capability building, and laser-focused execution. That said, business leaders who demonstrate the achievements possible with external data can capture the imagination of the broader leadership team and build excitement for scaling beyond early pilots and tests. An effective route is to begin with a small team that is focused on using external data to solve a well-defined problem and then use that success to generate momentum for expanding external-data efforts across the organization.
CXO Insights: Establishing AI fluency with Boards – The new strategic imperative
Though it is a rhetorical theme, we can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that’s relevant whether organizations are developing AI systems or buying AI-powered software. With AI adoption increasingly widespread, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization’s overall mission and risk management.
According to a recent global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”
Why is this an imperative? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. Recently, IBM stopped selling its facial recognition technology altogether. Further, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems and platforms in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them. Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environment, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that AI needs to be added to the board’s portfolio.
How Boards can assess the quality & impact of AI
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance.
Boards assessing the quality and impact of AI, and the oversight it requires, should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a siloed thing. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
- The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized on goals articulated — most often — by those developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren’t resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives.
The choices made by AI developers — or by an HR manager considering a third-party resume-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but some are not, such as those that result in hard-to-detect errors or bias that can suppress diversity or that charge customers different prices based on gender. Board oversight must include requirements for policies at both the corporate level and the use-case level that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed up by practical processes, strong culture, and compliance structures.
Enterprises may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — may most likely drive AI accountability in the short term. Individuals charged with advancing responsible AI within companies have reported that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust, quality, and mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Business continuity concerns. Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous and unknown third-party technologies, which might threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI carries the potential of new kinds of cyber threats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
The “New” Strategic Imperative for Boards
Regardless of how a board decides to approach AI fluency, it will play a critical role in considering the impact of the AI technologies a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment. The hardest questions a board will face about proposed AI applications are likely to be “Should we adopt AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal or at odds with organizational values.
To establish adequate AI fluency mechanisms, boards should do the following:
- Learn where in the organization AI and other exponential technologies are being used or are planned for use, and why.
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight.
- Encourage the appointment of a C-level executive to be responsible for this work, across company functions.
- Encourage adequate resourcing and training of the oversight function.
It’s not too soon for boards to begin this work; even for enterprises with little investment in AI development, AI will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards. AI fluency is the new strategic agenda.
Managing Bias in AI: Strategic Risk Management Strategy for Banks
AI is set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and strengthen risk management. According to the EIU, this could generate more than $250 billion in value for the banking industry. But there is a downside, since ML models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models. The added risk brought on by the complexity of algorithmic models can be mitigated by making well-targeted modifications to existing validation frameworks.
Conscious of the problem, many banks are proceeding cautiously, restricting the use of ML models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of anti-discrimination laws and incur significant fines—a concern that pushed one bank to ban its HR department from using a machine-learning resume screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide. Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.
New Risk mitigation exercises for ML models
There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the 2016 “flash crash” that dropped the British pound by 6 percent, for example, and a self-driving car reportedly failed to properly identify a pedestrian walking her bicycle across the street. The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills. The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks’ existing model-validation frameworks.
Here are the strategic approaches for enterprises to ensure that the specific risks associated with machine learning are addressed:
Demystification of “black boxes”: Machine-learning models have a reputation for being “black boxes.” Depending on the model’s architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model’s recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model’s less-than-transparent recommendations could have serious adverse consequences.
The degree of demystification required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model’s risk. In the United States, models that determine whether to grant credit to applicants are covered by fair-lending laws; the models therefore must be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model’s recommendation to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model’s reasons for doing so is not important. Validators also need to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:
- Linear and monotonic models (for example, linear-regression models): linear coefficients help reveal the dependence of a result on the inputs.
- Nonlinear and monotonic models (for example, gradient-boosting models with a monotonic constraint): restricting inputs so they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.
- Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability.
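To make the last model class concrete, here is a minimal sketch of Shapley-value attribution using the open-source shap package with a scikit-learn tree ensemble; the model, data, and feature set are illustrative assumptions rather than any bank’s production system.

```python
# A minimal sketch, assuming the open-source `shap` package and scikit-learn:
# Shapley-value attributions for a nonlinear, nonmonotonic tree ensemble.
# The model and data are illustrative, not a bank's actual credit system.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # efficient for tree ensembles
shap_values = explainer.shap_values(X[:10])  # one additive attribution per row

# Each row decomposes a single prediction into per-feature contributions --
# the raw material for the "reason codes" that fair-lending laws require.
print(shap_values[0])
```

Attributions like these are one way to back the reason codes described above, though the choice of explanation method remains a policy decision for each bank.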
Bias: A model can be influenced by four main types of bias: sample bias, measurement bias, algorithmic bias, and bias against groups or classes of people. The latter two, algorithmic bias and bias against people, can be amplified in machine-learning models. For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.
To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop “challenger” models, using alternative algorithms to benchmark performance. To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model’s use:
- Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.
- Demographic parity: outcomes are proportionally equal for all protected classes.
- Equal opportunity: true-positive rates are equal for each protected class.
- Equal odds: true-positive and false-positive rates are equal for each protected class.
Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
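As a concrete illustration, here is a minimal sketch of how validators might measure two of these definitions, demographic parity and equal opportunity, on held-out predictions; the synthetic data and binary group labels are assumptions for illustration only.

```python
# A minimal sketch measuring two of the fairness definitions above on
# held-out predictions; the random stand-in data and two-group encoding
# are illustrative assumptions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Spread in positive-outcome rates across protected classes."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Spread in true-positive rates across protected classes."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())  # TPR within group g
    return max(tprs) - min(tprs)

# Illustrative usage with random stand-in data
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1_000)
y_pred = rng.integers(0, 2, 1_000)
group = rng.integers(0, 2, 1_000)              # two protected classes
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```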
Feature engineering: Feature engineering is often much more complex in the development of machine-learning models than in traditional models. There are three reasons why. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before the training process can begin. Third, increasing numbers of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm’s maximizing the model’s out-of-sample performance.
In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model’s application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review the feature-engineering process only: for example, the processes for data transformation and feature exclusion. Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three considerations are generally needed: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.
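One possible validation control is sketched below under stated assumptions: an out-of-time stability check that can flag spurious AutoML-generated features like the letter-sequence example above. The model choice, the period column, and the coefficient-of-variation threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of an out-of-time stability check for engineered
# features. `df`, the "period" column, and the CV threshold are
# illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def importance_by_period(df, features, target, period_col="period"):
    """Fit one model per time period and collect feature importances."""
    importances = {}
    for period, chunk in df.groupby(period_col):
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(chunk[features], chunk[target])
        importances[period] = model.feature_importances_
    return pd.DataFrame(importances, index=features)

def unstable_features(importances, cv_threshold=1.0):
    """Flag features whose importance varies wildly across periods."""
    cv = importances.std(axis=1) / importances.mean(axis=1)
    return sorted(cv[cv > cv_threshold].index)

# Flagged features become candidates for the manual review of mathematical
# transformation, selection criteria, and business rationale described above.
```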
Hyperparameters: Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model’s performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and their ability to generalize can be very sensitive to the selected kernel function. Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
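For the quantitative case, here is a minimal sketch of mapping the hyperparameter space with a randomized search in scikit-learn; the model, parameter ranges, and ROC-AUC scoring are illustrative assumptions rather than a prescribed standard.

```python
# A minimal sketch of mapping the hyperparameter space with a search
# algorithm; the grid and scoring choice are illustrative assumptions.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "max_depth": randint(2, 20),        # e.g., depth of trees
        "n_estimators": randint(50, 500),
        "min_samples_leaf": randint(1, 50),
    },
    n_iter=50, cv=5, scoring="roc_auc", random_state=0,
)
search.fit(X, y)

# search.cv_results_ exposes the whole mapped space, so validators can judge
# whether performance is stable around the chosen point, not just optimal.
print(search.best_params_, round(search.best_score_, 3))
```

Inspecting the full set of cross-validation results, rather than only the single best point, lets validators judge whether performance is stable across a range of hyperparameter values.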
Production readiness: Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic and therefore require more computation. This requirement is commonly overlooked in the model-development process: developers build complex predictive models only to discover that the bank’s production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards. Validators already assess a range of model risks associated with implementation. For machine learning, however, they will need to expand the scope of this assessment: estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and measuring the runtime required.
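Here is a minimal sketch of a latency check that could feed this assessment; the 200 ms budget, batch size, and model are illustrative assumptions, since real standards come from the bank’s production requirements.

```python
# A minimal sketch of a pre-deployment latency check; the budget and batch
# size are illustrative assumptions.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5_000, n_features=30, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

LATENCY_BUDGET_MS = 200
batch = X[:100]                                # one scoring request
timings_ms = []
for _ in range(50):                            # repeat to get a distribution
    start = time.perf_counter()
    model.predict_proba(batch)
    timings_ms.append((time.perf_counter() - start) * 1_000)

p99 = float(np.percentile(timings_ms, 99))
print(f"p99 latency: {p99:.1f} ms ->", "OK" if p99 <= LATENCY_BUDGET_MS else "FAIL")
```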
Dynamic model calibration: Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data. This replaces the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms and Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model’s performance over time. Banks therefore need to decide when to allow dynamic recalibration. They might conclude that with the right controls in place, it is suitable for some applications, such as algorithmic trading. For others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models. With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model’s health, such as out-of-sample performance measures, and guardrails such as exposure limits or other predefined values that trigger a manual review.
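As one sketch of such a guardrail, assuming a rolling out-of-sample AUC as the health measure, the following freezes dynamic recalibration and flags a manual review when performance drifts below a floor; the 0.70 floor and 10-batch window are illustrative assumptions.

```python
# A minimal sketch of a recalibration guardrail; the AUC floor and window
# size are illustrative assumptions.
from collections import deque
from sklearn.metrics import roc_auc_score

AUC_FLOOR = 0.70
WINDOW = 10                               # recent evaluation batches to keep

recent_auc = deque(maxlen=WINDOW)

def allow_dynamic_update(y_true, y_score):
    """Permit the next dynamic recalibration only while the model is healthy."""
    recent_auc.append(roc_auc_score(y_true, y_score))
    healthy = min(recent_auc) >= AUC_FLOOR
    if not healthy:
        # Freeze recalibration and route the model to manual review.
        print("Out-of-sample AUC below floor: recalibration frozen, review required")
    return healthy
```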
Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. One bank’s model risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cyber security.
From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model’s performance and finely tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping them mitigate risk and gain the confidence to start harnessing the full power of machine learning.
Emergence of AI Powered Enterprise: Strategic considerations for Leaders
The excitement around artificial intelligence is palpable. It seems that not a day goes by without one of the giants in the industry coming out with a breakthrough application of the technology or adding a new nuance to the overall body of knowledge. Horizontal and industry-specific use cases of AI abound, and there is always something exciting around the corner.
However, with the keen interest from global leaders of multinational corporations, the conversation is shifting towards having a strategic agenda for AI in the enterprise. Business heads are less interested in topical experiments and minuscule productivity gains made in the short term. They are more keen to understand the impact of AI in their areas of work from a long-term standpoint. Perhaps the most important question that they want to see answered is – what will my new AI-enabled enterprise look like? The question is as strategic as it is pertinent. For business leaders, the most important issues are – improving shareholder returns and ensuring a productive workforce – as part of running a sustainable, future-ready business. Artificial intelligence may be the breakout technology of our time, but business leaders are more occupied with trying to understand just how this technology can usher in a new era of their business, how it is expected to upend existing business value chains, unlock new revenue streams, and deliver improved efficiencies in cost outlays. In this article, let us try to answer these questions.
AI is Disrupting Existing Value Chains
Ever since Michael Porter first expounded on the concept in his best-selling book, Competitive Advantage: Creating and Sustaining Superior Performance, the value chain has gained great currency in the minds of business leaders globally. The idea behind the value chain was to map out the interlinkages between the primary activities that work together to conceptualize and bring a product or service to market (R&D, manufacturing, supply chain, marketing, etc.), as well as the role played by support activities performed by other internal functions (finance, HR, IT, etc.). Strategy leaders globally leverage the concept of value chains to improve business planning, identify new possibilities for improving business efficiency, and exploit potential areas for new growth.
Now with AI entering the fray, we might see new vistas in the existing value chains of multinational corporations. For instance:
- Manufacturing is becoming heavily augmented by artificial intelligence and robotics. We are seeing these technologies gain a stronger foothold across processes requiring increasing sophistication. Business leaders now need to seriously consider workforce planning for a labor force that consists of both human and artificial workers at their manufacturing units. Due attention should also be paid to ensuring that both coexist in a symbiotic and complementary manner.
- Logistics and Delivery are two other areas where we are seeing steady growth in the use of artificial intelligence. Demand planning and fulfilment through AI has already reached a high level of sophistication at most retailers. Now Amazon – which handles some of the largest and most complex logistics networks in the world – is in advanced stages of bringing in unmanned aerial vehicles (drones) for deliveries through its Amazon Prime Air program. Business leaders expect outcomes ranging from increased customer satisfaction (through faster deliveries) to reduced delivery costs.
- Marketing and Sales are constantly at the forefront of some of the most exciting inventions in AI. One of the most recent and evolved applications of AI is Reactful. A tool developed for eCommerce properties, Reactful helps drive better customer conversions by analyzing the clickstream and digital footprints of visitors to web properties and persuading them toward making a purchase. Business leaders need to explore new ideas such as this that can help drive meaningful engagement and top-line growth through these new AI-powered tools.
AI is Enabling New Revenue Streams
The second way business leaders are thinking strategically about AI is its potential to unlock new sources of revenue. Earlier, functions such as internal IT were seen as cost centers. In today’s world, due to cost and competitive pressure, areas of the business that were traditionally considered cost centers are required to reinvent themselves as revenue and profit centers. The expectation from AI is no different. There is a need to justify the investments made in this technology and to find a way for it to unlock new streams of revenue in traditional organizations. Here are two key ways in which business leaders can monetize AI:
- Indirect Monetization is one of the forms of leveraging AI to unlock new revenue streams. It involves embedding AI into traditional business processes with a focus on driving increased revenue. We hear of multiple companies from Amazon to Google that use AI-powered recommendation engines to drive incremental revenue through intelligent recommendations and smarter bundling. The action item for business leaders is to engage stakeholders across the enterprise to identify areas where AI can be deeply ingrained within tech properties to drive incremental revenue.
- Direct Monetization involves directly adding AI as a feature to existing offerings. Examples abound in this area – from Salesforce bringing in Einstein into their platform as an AI-centric service to cloud infrastructure providers such as Amazon and Microsoft adding AI capabilities into their cloud offerings. Business leaders should brainstorm about how AI augments their core value proposition and how it can be added into their existing product stack.
AI is Bringing Improved Efficiencies
The third critical intervention for a new AI-enabled enterprise is bringing to the fore a more cost-effective business. Numerous topical and early-stage experiments with AI have brought interesting successes in reducing the total cost of doing business. Now is the time to create a strategic roadmap for these efficiency-led interventions and quantitatively measure their impact on the business. Some food for thought for business leaders includes:
- Supply Chain Optimization is an area that is ripe for AI-led disruption. With increasing varieties of products and categories and new virtual retailers arriving on the scene, there is a need for companies to reduce their outlay on the network that procures and delivers goods to consumers. One example of AI augmenting the supply chain function comes from Evertracker – a Hamburg-based startup. By leveraging IOT sensors and AI, they help their customers identify weaknesses such as delays and possible shortages early, basing their analysis on internal and external data. Business leaders should scout for solutions such as these that rely on data to identify possible tweaks in the supply chain network that can unlock savings for their enterprises.
- Human Resources is another area where AI-centric solutions can be extremely valuable to drive down the turnaround time for talent acquisition. One such solution is developed by Recualizer – which reduces the need for HR staff to scan through each job application individually. With this tool, talent acquisition teams need to first determine the framework conditions for a job on offer, while leaving the creation of assessment tasks to the artificial intelligence system. The system then communicates the evaluation results and recommends the most suitable candidates for further interview rounds. Business leaders should identify such game-changing solutions that can make their recruitment much more streamlined – especially if they receive a high number of applications.
- The Customer Experience arena also throws up very exciting AI use cases. We have now gone well beyond bots answering frequently asked questions. Today, AI-enabled systems can also provide personalized guidance to customers, helping organizations level up their customer experience while maintaining a lower cost of delivering that experience. Booking.com is a case in point: its chatbot helps customers identify interesting activities and events available at their travel destinations. Business leaders should explore such applications that provide the double advantage of improving customer experience while maintaining strong bottom-line performance.
The possibilities for the new AI-enabled enterprise are as exciting as they are varied. The ideas shared here are by no means exhaustive, but hopefully they seed interesting ideas for powering improved business performance. Strategy leaders and business heads need to consider how their AI-led businesses can disrupt existing value chains for the better and unlock new ideas for improving bottom-line and top-line performance. This will usher in a new era of the enterprise, enabled by AI.
Personal Data Sharing & Protection: Strategic relevance from India’s context
India’s investments in digital financial infrastructure—known as “India Stack”—have sped up the large-scale digitization of people’s financial lives. As more and more people begin to conduct transactions online, questions have emerged about how to provide millions of customers adequate data protection and privacy while allowing their data to flow throughout the financial system. Data-sharing among financial services providers (FSPs) can enable providers to more efficiently offer a wider range of financial products better tailored to the needs of customers, including low-income customers.
However, it is important to ensure customers understand and consent to how their data are being used. India’s solution to this challenge is account aggregators (AAs). The Reserve Bank of India (RBI) created AAs in 2018 to simplify the consent process for customers. In most open banking regimes, financial information providers (FIPs) and financial information users (FIUs) directly exchange data. This direct model of data exchange—such as between a bank and a credit bureau—offers customers limited control and visibility into what data are being shared and to what end. AAs have been designed to sit between FIPs and FIUs to facilitate data exchange more transparently. Despite their name, AAs are barred from seeing, storing, analyzing, or using customer data. As trusted, impartial intermediaries, they simply manage consent and serve as the pipes through which data flow among FSPs. When a customer gives consent to a provider via the AA, the AA fetches the relevant information from the customer’s financial accounts and sends it via secure channels to the requesting institution.
There are several operational and coordination challenges across these three types of entities: FIPs, FIUs, and AAs. There are also questions around the data-sharing business model of AAs. Since AAs are additional players, they generate costs that must be offset by efficiency gains in the system to mitigate overall cost increases to customers. It remains an open question whether AAs will advance financial inclusion, how they will navigate issues around digital literacy and smartphone access, how the limits of a consent-based model of data protection and privacy play out, what capacity issues will be encountered among regulators and providers, and whether a competitive market of AAs will emerge given that regulations and interoperability arrangements largely define the business.
Account Aggregators (AAs):
Account aggregators (AAs) are one of the new categories of non-banking financial companies (NBFCs) to figure into India Stack—India’s interconnected set of public and nonprofit infrastructure that supports financial services. India Stack has scaled considerably since its creation in 2009, marked by rapid digitization and parallel growth in mobile networks, reliable data connectivity, falling data costs, and continuously increasing smartphone use. Consequently, the creation, storage, use, and analysis of personal data have become increasingly relevant. Following an “open banking” approach, the Reserve Bank of India (RBI) licensed seven AAs in 2018 to address emerging questions around how data can be most effectively leveraged to benefit individuals while ensuring appropriate data protection and privacy, with consent being a key element. RBI created AAs to address the challenges posed by the proliferation of data by enabling data-sharing among financial institutions with customer consent. The intent is to provide a method through which customers can consent (or not) to a financial services provider accessing their personal data held by other entities. Providers are interested in these data, in part, because information shared by customers, such as bank statements, will allow providers to better understand customer risk profiles. The hypothesis is that consent-based data-sharing will help poorer customers qualify for a wider range of financial products—and receive financial products better tailored to their needs.
Data-Sharing Model: The New Perspective
Paper-based data collection is inconvenient, time consuming, and costly for customers and providers. Where models for digital sharing exist, they typically involve transferring data through intermediaries that are not always secure or through specialized agencies that offer little protection for customers. India’s consent-based data-sharing model provides a digital framework that enables individuals to give and withdraw consent on how, and how much of, their personal data are shared via secure and standardized channels. India’s guiding principles for sharing data with user consent—not only in the financial sector—are outlined in the National Data Sharing and Accessibility Policy (2012) and the Policy for Open Application Programming Interfaces for the Government of India. The Information Technology Act (2000) requires any entity that shares sensitive personal data to obtain consent from the user before the information is shared. The forthcoming Personal Data Protection Bill makes it illegal for institutions to share personal data without consent.
India’s Ministry of Electronics and Information Technology (MeitY) has issued an Electronic Consent Framework to define a comprehensive mechanism to implement policies for consensual data-sharing. It provides a set of guiding design principles, outlines the technical format of the data request, and specifies the parameters governing the terms of use of the data requested. It also specifies how to log both consent and data flows. This “consent artifact” was adopted by RBI, SEBI, IRDAI, and PFRDA. Components of the consent artifact structure include the following (a schematic sketch follows the list):
- Identifier: Specifies the entities involved in the transaction: who is requesting the data, who is granting permission, who is providing the data, and who is recording consent.
- Data: Describes the type of data being accessed and the permissions for use of the data. Three types of permissions are available: view (read only), store, and query (request for specific data). The artifact structure also specifies the data that are being shared, the date range for which they are being requested, the duration of storage by the consumer, and the frequency of access.
- Purpose: Describes the end use, for example, to compute a loan offer.
- Log: Contains logs of who asked for consent, whether it was granted or not, and data flows.
- Digital signature: Identifies the digital signature and the digital ID user certificate used by the provider to verify the digital signature. This allows providers to share information in encrypted form.
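As a purely schematic illustration of how these components fit together, the sketch below represents a consent artifact as a Python dictionary; every field name and value here is an assumption for illustration, not MeitY’s normative schema.

```python
# A schematic sketch of a consent artifact following the component
# structure listed above. All field names and values are illustrative
# assumptions, not the official specification.
consent_artifact = {
    "identifier": {
        "data_requester": "lender-fiu-001",   # who is requesting the data
        "consent_grantor": "customer-12345",  # who is granting permission
        "data_provider": "bank-fip-042",      # who holds and provides the data
        "consent_recorder": "aa-007",         # the AA recording consent
    },
    "data": {
        "type": "bank_statement",
        "permissions": ["view"],              # view (read only), store, query
        "date_range": {"from": "2023-01-01", "to": "2023-12-31"},
        "storage_duration_days": 30,
        "access_frequency": "once",
    },
    "purpose": "compute a loan offer",
    "log": [],                                # consent grants/denials, data flows
    "digital_signature": {
        "signature": "<base64-signature>",
        "user_certificate": "<digital-id-cert>",  # verifies the signature
    },
}
```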
The Approach:
The AA consent-based data-sharing model mediates the flow of data between producers and users of data, ensuring that sharing is subject to granular customer consent. AAs manage only the consent and data flow for the benefit of the consumer, mitigating the risk of an FIU pressuring consumers to consent to access to their data in exchange for a product or service. However, AAs, as entities that sit in the middle of this ecosystem, come with additional costs that will affect the viability of the business model and the cost of servicing consumers. FIUs most likely will urge consumers to go directly to an AA to receive fast, efficient, and low-cost services. However, AAs ultimately must market their services directly to the consumer. While AA services are not an easy sell, rising awareness among Indian consumers that their data are being sold without their consent or knowledge may give rise to the initial wave of adopters. While the AA model is promising, it remains to be seen how and when it will have a direct impact on the financial lives of consumers.
Differences between the Personal Data Protection Bill and the GDPR
There are some major differences between the two.
First, the bill gives India’s central government the power to exempt any government agency from the bill’s requirements. This exemption can be given on grounds related to national security, national sovereignty, and public order.
While the GDPR offers EU member states similar escape clauses, they are tightly regulated by other EU directives. Without these safeguards, India’s bill potentially gives India’s central government the power to access individual data over and above existing Indian laws such as the Information Technology Act of 2000, which dealt with cyber crime and e-commerce.
Second, unlike the GDPR, India’s bill allows the government to order firms to share any of the non-personal data they collect with the government. The bill says this is to improve the delivery of government services. But it does not explain how this data will be used, whether it will be shared with other private businesses, or whether any compensation will be paid for the use of this data.
Third, the GDPR does not require businesses to keep EU data within the EU. They can transfer it overseas, so long as they meet conditions such as standard contractual clauses on data protection, codes of conduct, or certification systems that are approved before the transfer.
The Indian bill allows the transfer of some personal data, but sensitive personal data can only be transferred outside India if it meets requirements that are similar to those of the GDPR. What’s more, this data can only be sent outside India to be processed; it cannot be stored outside India. This will create technical issues in delineating between categories of data that have to meet this requirement, and add to businesses’ compliance costs.
AI Strategy: The Epiphany of Digital Transformation
In the past months, due to lockdowns and work from home, enterprises have had an epiphany about the massive shifts in business and strategic models needed to stay relevant and solvent. Digital transformation, touted as the biggest strategic differentiator and competitive advantage for enterprises, faced a quintessential inertia of mass adoption in legacy enterprises and remained more on business-planning slides than in full implementation. However, digital transformation is not about an aggregation of exponential technologies and ad hoc use cases, or stitching alliances with deep-tech startups. The underpinning of digital transformation is AI, and AI strategy has become the foundational aspect of accomplishing digital transformation for enterprises and generating tangible business metrics. But before we get to the significance of AI strategy in digital transformation, we need to understand the core of digital transformation itself. Because digital transformation will look different for every enterprise, it can be hard to pinpoint a definition that applies to all. In general terms, however, we define digital transformation as the integration of core areas of business, resulting in fundamental changes to how businesses operate and how they deliver value to customers.
In specific terms, though, digital transformation can take a very different shape depending on the business moment in question. From a customer’s point of view, “Digital transformation closes the gap between what digital customers already expect and what analog businesses actually deliver.”
Does digital transformation really mean bunching together exponential technologies? I believe that digital transformation is first and foremost a business transformation. A digital mindset is not only about new-age technology, but about curiosity, creativity, problem-solving, empathy, flexibility, decision-making, and judgment, among others. Enterprises need to foster this digital mindset, both within their own boundaries and across company units. The World Economic Forum lists the top 10 skills needed for the fourth industrial revolution; none of them is purely technical. They are, rather, a combination of important soft skills relevant to the digital revolution. You don’t need to be a technical expert to understand how technology will impact your work. You need to know the foundational aspects, remain open-minded, and work with technology mavens.
Digital transformation is more about cultural change, which requires enterprises to continually challenge the status quo, experiment often, and get comfortable with failure. The most likely reason for a business to undergo digital transformation is survival and relevance. Businesses mostly don’t transform by choice, because transformation is expensive and risky; they go through it when they have failed to evolve. Hence its implementation calls for tough decisions, like walking away from long-standing business processes that companies were built upon in favor of relatively new practices that are still being defined.
Business Implementation aspects of Digital Transformation
Disruption in digital business implies a more positive and evolving atmosphere, instead of the usual negative undertones attached to the word. According to the MIT Center for Digital Business, “Companies that have embraced digital transformation are 26 percent more profitable than their average industry competitors and enjoy a 12 percent higher market valuation.” A lot of startups and enterprises are adopting an evolutionary approach, transforming their business models themselves as part of digital transformation. According to McKinsey, one-third of the top 20 firms in industry segments will be disrupted by new competitors within five years.
The various business models being adopted in the digital transformation era are:
- The Subscription Model (Netflix, Dollar Shave Club, Apple Music): Disrupts through “lock-in” by taking a product or service that is traditionally purchased on an ad hoc basis and locking in repeat custom by charging a subscription fee for continued access to the product/service.
- The Freemium Model (Spotify, LinkedIn, Dropbox): Disrupts through digital sampling, where users pay for a basic service or product with their data or ‘eyeballs’ rather than money, and are then charged to upgrade to the full offer. Works where the marginal cost of extra units and distribution is lower than advertising revenue or the sale of personal data.
- The Free Model (Google, Facebook): Disrupts with an ‘if-you’re-not-paying-for-the-product-you-are-the-product’ model that involves selling personal data or ‘advertising eyeballs’ harvested by offering consumers a ‘free’ product or service that captures their data/attention.
- The Marketplace Model (eBay, iTunes, App Store, Uber, Airbnb): Disrupts with the provision of a digital marketplace that brings together buyers and sellers directly, in return for a transaction or placement fee or commission.
- The Access-over-Ownership Model (Zipcar, Peer buy): Disrupts by providing temporary access to goods and services traditionally only available through purchase. Includes ‘Sharing Economy’ disruptors, which take a commission from people monetizing their assets (home, car, capital) by lending them to ‘borrowers’.
- The Hypermarket Model (Amazon, Apple): Disrupts by ‘brand bombing’, using sheer market power and scale to crush competition, often by selling below cost price.
- The Experience Model (Tesla, Apple): Disrupts by providing a superior experience, for which people are prepared to pay.
- The Pyramid Model (Amazon, Microsoft, Dropbox): Disrupts by recruiting an army of resellers and affiliates who are often paid on a commission-only model.
- The On-Demand Model (Uber, Operator, TaskRabbit): Disrupts by monetizing time and selling instant access at a premium. Includes taking a commission from people with money but no time, who pay for goods and services delivered or fulfilled by people with time but no money.
- The Ecosystem Model (Apple, Google): Disrupts by selling an interlocking and interdependent suite of products and services that increase in value as more are purchased. Creates consumer dependency.
Since digital transformation and its manifestation into various business models are being rapidly adopted by startups, they are providing tough competition to incumbent corporate houses and large enterprises. Though enterprises also want to digitally transform their businesses, their scale and complexity make this a difficult and resource-consuming activity. This has pushed enterprises to adopt strategies to counter the cannibalizing effect, in the following ways:
- The Block Strategy. Using all means available to inhibit the disruptor. These means can include claiming patent or copyright infringement, erecting regulatory hurdles, and using other legal barriers.
- The Milk Strategy. Extracting the most value possible from vulnerable businesses while preparing for the inevitable disruption
- The Invest in Disruption Model. Actively investing in the disruptive threat, including disruptive technologies, human capabilities, digitized processes, or perhaps acquiring companies with these attributes
- The Disrupt the Current Business Strategy. Launching a new product or service that competes directly with the disruptor, and leveraging inherent strengths such as size, market knowledge, brand, access to capital, and relationships to build the new business
- The Retreat into a Strategic Niche Strategy. Focusing on a profitable niche segment of the core market where disruption is less likely to occur (e.g. travel agents focusing on corporate travel, and complex itineraries, book sellers and publishers focusing on academia niche)
- The Redefine the Core Strategy. Building an entirely new business model, often in an adjacent industry where it is possible to leverage existing knowledge and capabilities (e.g. IBM to consulting, Fujifilm to cosmetics)
- The Exit Strategy. Exiting the business entirely and returning capital to investors, ideally through a sale of the business while value still exists (e.g. MySpace selling itself to Newscorp)
The curious evolution of AI and its relevance in digital transformation
So here’s an interesting question: AI has been around for more than 60 years, so why is it only now gaining traction with the advent of digital? The first practical application of such “machine intelligence” was introduced by Alan Turing, British mathematician and WWII code-breaker, in 1950. He even created the Turing test, which is still used today as a benchmark to determine a machine’s ability to “think” like a human. The biggest differences between AI then and now are hardware limitations, access to data, and the rise of machine learning.
Hardware limitations prevented AI adoption from being sustained until the late 1990s. There were many instances where the scope and opportunity of AI-led transformation were identified and appreciated, but implementation ran into more difficult circumstances. The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956, but it eventually became obvious that researchers had grossly underestimated the difficulty of the project due to computer hardware limitations. The U.S. and British governments stopped funding undirected research into artificial intelligence, leading to the years known as an “AI winter”.
In another example, in 1980 a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned by the absence of the needed computing power (hardware) and withdrew funding again. Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry, thanks to powerful computer hardware. Teaming this with the rise of digital, which led to an explosion of data and the adoption of data generation in every aspect of business, made it highly convenient for AI not only to be adopted but to evolve toward more accurate execution.
The Core of Digital Transformation: AI Strategy
According to McKinsey, by 2023, 85 percent of all digital transformation initiatives will have AI strategy embedded at their core. Due to radical computational power, near-endless amounts of data, and unprecedented advances in ML algorithms, AI strategy will emerge as the most disruptive business scenario, and its manifestation into the various trends that we see, and will continue to see, shall drive digital transformation as we understand it. The following are the future-forward scenarios of AI strategy becoming core to digital transformation:
AI’s growing entrenchment: This time, the scale and scope of the surge in attention to AI are much larger than before. For starters, infrastructure speed, availability, and sheer scale have enabled bolder algorithms to tackle more ambitious problems. Not only is the hardware faster, sometimes augmented by specialized arrays of processors (e.g., GPUs), it is also available in the shape of cloud services, data farms, and data centers.
Geography and societal impact: AI adoption is reaching institutions outside of the industry. Lawyers will start to grapple with how laws should deal with autonomous vehicles; economists will study AI-driven technological unemployment; sociologists will study the impact of AI-human relationships. This is the world of the future and the new next.
Artificial intelligence will be democratized: A recent Forrester study revealed that of the 58 percent of professionals researching artificial intelligence, only 12 percent are actually using an AI system. AI requires specialized skills and infrastructure to implement. Companies like Facebook have realized this and are already doing all they can to simplify the implementation of AI and make it more accessible. Cloud platforms like Google APIs, Microsoft Azure, and AWS are allowing developers to create intelligent apps without having to set up or maintain any other infrastructure.
Niche AI will grow: By all accounts, 2020 and beyond won’t be about large, general-purpose AI systems. Instead, there will be an explosion of specific, highly niche artificial intelligence adoption cases. These include autonomous vehicles (cars and drones), robotics, bots (consumer-oriented ones such as Amazon Echo), and industry-specific AI (think finance, health, security, etc.).
Continued Discourse on AI ethics, security & privacy: Most AI systems are immensely complex sponges that absorb data and process it at tremendous rates. The risks related to AI ethics, security and privacy are real and need to be addressed through consideration and consensus. Sure, it’s unlikely that these problems will be solved in 2020, but as long as the conversation around these topics continues, we can expect at least some headway.
Algorithm economy: With massive data generation using flywheels, an economy will be created for algorithms, like a marketplace. Engineers, data scientists, organizations, and others will share algorithms for using the data to extract the required information.
Where is AI heading on the digital road?
While much of this is still rudimentary at the moment, we can expect sophisticated AI to significantly impact our everyday lives. Here are four ways AI might affect us in the future:
Humanizing AI: AI will grow beyond a “tool” to fill the role of “co-worker.” Most AI software is too technologically hidden to significantly change the daily experience of the average worker; it exists only in a back end with little interface with humans. But several AI companies combine advanced AI with automation and intelligent interfaces that drastically alter the day-to-day workflow for workers.
Design thinking and behavioral science in AI: We will witness a divergence from more powerful intelligence to more creative intelligence. There have already been attempts to make AI engage in creative efforts, such as artwork and music composition. We’ll see more and more artificial intelligence designing artificial intelligence, resulting in many mistakes, plenty of dead ends, and some astonishing successes.
Rise of cyborgs: As augmented AI is already mainstream thinking, the future might witness the perfect culmination of man-machine augmentation: AI augmented to humans, intelligently handling operations that humans cannot, using neural commands.
AI oracle: AI might become so connected with every aspect of our lives, processing every quantum of data from every perspective, that it would know precisely how to raise the overall standard of living for the human race. People would religiously follow its instructions (as we already follow GPS navigation), leading to an equation of dependence closer to devotion.
The Final Word
Digital business transformation is the ultimate challenge in change management. It impacts not only industry structures and strategic positioning but all levels of an organization (every task, activity, and process) and even its extended supply chain. Hence, to brace for digital-led disruption, one has to embrace AI-led strategy. Organizations that deploy AI strategically will ultimately enjoy advantages ranging from cost reductions and higher productivity to top-line benefits such as increasing revenue and profits, richer customer experiences, and working-capital optimization.
(AIQRATE, a bespoke global AI advisory and consulting firm. A first in its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients to navigate their AI-powered transformation, innovation & revival journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs and Senior leaders advising them on their Analytics to AI journey construct with the art of possible AI roadmap blended with a jump start approach to AI driven transformation with AI@scale centric strategy; AIQRATE also consults on embedding AI as core to business strategy within business processes & functions and augmenting the overall decision-making capabilities. Our bespoke AI advisory services focus on curating & designing building blocks of AI strategy, embed AI@scale interventions and create AI powered organizations.
AIQRATE’s path-breaking 50+ AI consulting frameworks, methodologies, primers, toolkits and playbooks crafted by seasoned and proven AI strategy advisors enable Indian & global enterprises, GCCs, Startups, SMBs, VC/PE firms, and Academic Institutions to enhance business performance & ROI and accelerate decision-making capability. AIQRATE also provides advisory support to Technology companies, business consulting firms, GCCs, AI pure play outfits on curating discerning AI capabilities and solutions along with differentiated GTM and market development strategies.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings. Follow us on Linkedin | Facebook | YouTube | Twitter | Instagram )