2nd Global AI Conclave 2021
On 10-02-2021, 4:00 PM to 8:30 PM IST
BML Munjal University would like to carry forward the discussion and intellectual interest generated during the inaugural Global AI Conclave last year to further explore AI’s possibilities in specific areas such as Healthcare, Manufacturing, Banking, and Fintech. Things have been rather dynamic in these pandemic times, and this conclave aims to capture our understanding within that context as well. The conclave is expected to be attended by a wide spectrum of people, including business leaders, industry practitioners, policy makers and implementers, and researchers, students, and enthusiasts. An important element of the conclave will be the launch of a research report on the state of Artificial Intelligence in the healthcare sector, which has been a major focus of attention in these testing times.
HEALTHCARE IN THE AI ERA
AI and related technologies hold tremendous promise for the healthcare sector across the entire value chain, though they also carry certain downsides, such as risks to patient safety and data security. Artificial Intelligence is being used to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. While each AI technology can contribute significant value alone, the greater potential lies in the synergies generated by using them together across the entire patient journey, from diagnosis to treatment to ongoing health maintenance. In a country where healthcare systems are still evolving and have been tested to the hilt over the past year, the two panel discussions in this segment take up two important areas: (1) diagnosis and (2) ethics.
PANEL 1 | Diagnosis Made Easier
It is expected that AI will access multiple sources of data to reveal patterns in disease and aid treatment and care. Diagnostics is focused on using AI and machine learning to improve diagnostic accuracy and cut costs. AI diagnostics have the potential to improve the delivery and effectiveness of health care. This panel will discuss these possibilities from a multidimensional perspective and is expected to consist of the following esteemed experts:
PANELLISTS
- Sanjay Dhawan – Group Director, ClearMedi Healthcare.
- Prof David Snead – Consultant Pathologist, University Hospitals Coventry & Warwickshire; Professor of Pathology Practice, University of Warwick
- Prashant Warier – Founder, Qure.AI
- MODERATOR – Nirupam Srivastava – VP Strategy, M&A and AI/Digital Transformation, Hero Corporate Services
PANEL 2 | What is ethical about it?
The AI code of ethics addresses the role of artificial intelligence as it relates to the continued development of humanity. According to some, the purpose of AI should be to produce beneficial intelligence rather than undirected intelligence. Privacy and surveillance, prejudice and discrimination, and, most importantly, the role of human judgement all come into play when we speak about AI within the healthcare ecosystem. This very relevant and timely discussion will have the following esteemed experts:
PANELLISTS
- Yonah Welkar – AI innovator, Explorer, Mentor and Board Member in Education, Health, AI and Ethics
- Chhavi Chauhan – Director of Scientific Outreach, American Society for Investigative Pathology. Ethics Advisor at the Alliance for Artificial Intelligence in Healthcare and an AI Policy Expert at the AI Policy Exchange
- Balaji Vishwanathan – Founder, Invento Robotics
- Dipyaman Sanyal – Data Scientist, Educator, Real Estate Researcher, Behavioral Economist, and Quant, Dono Consulting
- MODERATOR – Sameer Dhanrajani – CEO, AIQRATE Advisory & Consulting
PANEL 3 | AI in Business: The Evangelist View
Artificial Intelligence has moved into the mainstream of business, driven by advances in cloud computing, big data, open-source software, and improved algorithms. As AI technologies impact how we work, live, and manage businesses, organizational leaders, innovators, and investors are looking to harness the power of AI to create customer value and competitive advantage. This panel discussion would provide a window into how industries across the spectrum – banking, manufacturing, technology companies etc. are capitalizing on the AI opportunity. This panel would consist of the following esteemed experts:
PANELLISTS
- Sandeep Alur – Director, Microsoft Technology Center, Microsoft
- Ajit Jaokar – Course Director, Artificial Intelligence, Cloud and Edge Implementations, University of Oxford
- Utpal Chakraborty – Head of Artificial Intelligence at YES BANK, Chief Data Scientist, AI Researcher, TEDx Speaker, and Agile Lean Practitioner
- Pinak Dattaray – Associate Partner, McKinsey
- Dinis Guarda – CEO, board member, and digital and crypto economics driver and evangelist – Openbusinesscouncil.org, Ztudium, techabc, fashion abc
- MODERATOR – David Siegel – Founder, Cutting through the Noise
Cloud Platforms: Strategic Enabler for AI-led Transformation
CIOs and CTOs have been toying with the idea of cloud adoption at scale for more than a decade, since the first corporate experiments with external cloud platforms were conceptualized, and the verdict on its business value is long in. Companies that adopt the cloud well bring new capabilities to market more quickly, innovate more easily, and scale more efficiently—while also reducing technology risk.
Unfortunately, the verdict is still out on what constitutes a successful cloud implementation that actually captures that value. Most CIOs and CTOs default to traditional implementation models that may have been successful in the past but that make it almost impossible to capture the real value from the cloud. Defining the cloud opportunity too narrowly with siloed business initiatives, such as next-generation application hosting or data platforms, almost guarantees failure. That’s because no design consideration is given to how the organization will need to operate holistically in the cloud, increasing the risk of disruption from nimbler attackers with modern technology platforms that enable business agility and innovation.
Companies that reap value from cloud platforms treat their adoption as a business and AI-led transformation by doing three things:
1. Focusing investments on business domains where cloud can enable increased revenues and improved margins
2. Selecting a technology and sourcing model that aligns with business strategy and risk constraints
3. Developing and implementing an operating model that is oriented around the cloud
CIOs and CTOs need to drive cloud adoption, but, given the scale and scope of change required to exploit this opportunity fully, they also need support and air cover from the rest of the management team.
Using cloud to enable AI-led transformation: Only 14 percent of companies launching AI transformations have seen sustained and material performance improvements. Why? Technology execution capabilities are often not up to the task. Outdated AI technology environments make change expensive. Quarterly release cycles make it hard to tune AI capabilities to changing market demands. Rigid and brittle infrastructures choke on the data required for sophisticated analytics.
Operating in the cloud can reduce or eliminate many of these issues. Exploiting cloud services and tooling, however, requires change across all of IT and many business functions as well—in effect, a different business-technology model.
AI-led transformation success requires CIOs and tech leaders to do three things:
1. Focus cloud investments in business domains where cloud platforms can enable increased revenues and improved margins:
The vast majority of the value the cloud generates comes from the increased agility, innovation, and resilience provided to the business with sustained velocity. In most cases, this requires focusing cloud adoption on embedding reusability and composability so that investment in modernizing can be rapidly scaled across the rest of the organization. This approach can also help focus programs on where the benefits matter most, rather than scrutinizing individual applications for potential cost savings.
Faster time to market: Cloud-native companies can release code into production hundreds or thousands of times per day using end-to-end automation. Even traditional enterprises have found that automated cloud platforms allow them to release new capabilities daily, enabling them to respond to market demands and quickly test what does and doesn’t work. As a result, companies that have adopted cloud platforms report that they can bring new capabilities to market about 20 to 40 percent faster.
Ability to create innovative business offerings: Each of the major cloud service providers offers hundreds of native services and marketplaces that provide access to third-party ecosystems with thousands more. These services rapidly evolve and grow and provide not only basic infrastructure capabilities but also advanced functionality such as facial recognition, natural-language processing, quantum computing, and data aggregation.
Reduced risk: Cloud clearly disrupts existing security practices and architectures but also provides a rare opportunity to eliminate vast operational overhead for those that can design their platforms to consume cloud securely. Taking advantage of the multibillion-dollar investments CSPs have made in security operations requires a cyber-first design that automatically embeds robust standardized authentication, hardened infrastructure, and resilient, interconnected data-center availability zones.
Efficient scalability: Cloud enables companies to automatically add capacity to meet surge demand (in response to increasing customer usage, for example) and to scale out new services in seconds rather than the weeks it can take to procure additional on-premises servers. This capability has been particularly crucial during the COVID-19 pandemic, when the massive shift to digital channels created sudden and unprecedented demand peaks.
2. Select a technology, sourcing, and migration model that aligns with business and risk constraints
Decisions about cloud architecture and sourcing carry significant risk and cost implications—to the tune of hundreds of millions of dollars for large companies. The wrong technology and sourcing decisions will raise concerns about compliance, execution success, cyber security, and vendor risk—more than one large company has stopped its cloud program cold because of multiple types of risk. The right technology and sourcing decisions not only mesh with the company’s risk appetite but can also “bend the curve” on cloud-adoption costs, generating support and excitement for the program across the management team.
If CIOs or CTOs make those decisions based on the narrow criteria of IT alone, they can create significant issues for the business. Instead, they must develop a clear picture of the business strategy as it relates to technology cost, investment, and risk.
3. Change operating models to capture cloud value
Capturing the value of migrating to the cloud requires changing both how IT works and how IT works with the business. The best CIOs and CTOs follow a number of principles in building a cloud-ready operating model:
Make everything a product. To optimize application functionality and mitigate technical debt, CIOs need to shift from “IT projects” to “products”—the technology-enabled offerings used by customers and employees. Most products will provide business capabilities such as order capture or billing. Automated as-a-service platforms will provide underlying technology services such as data management or web hosting. This approach focuses teams on delivering a finished working product rather than isolated elements of the product. This more integrated approach requires stable funding and a “product owner” to manage it.
Integrate with business. Achieving the speed and agility that cloud promises requires frequent interaction with business leaders to make a series of quick decisions. Practically, business leaders need to appoint knowledgeable decision makers as product owners for business-oriented products. These are people who have the knowledge and authority to make decisions about how to sequence business functionality as well as the understanding of the journeys of their “customers.”
Drive cloud skill sets across development teams. Traditional centers of excellence charged with defining configurations for cloud across the entire enterprise quickly get overwhelmed. Instead, top CIOs invest in delivery designs that embed mandatory self-service and co-creation approaches using abstracted, unified ways of working that are socialized using advanced training programs (such as “train the trainer”) to embed cloud knowledge in each agile tribe and even squad.
How Technology Leaders can join forces with leadership to drive AI-led transformation
Given the economic and organizational complexity required to get the greatest benefits from the cloud, heads of infrastructure, CIOs, and CTOs need to engage with the rest of the leadership team. That engagement is especially important in the following areas:
Technology funding. Technology funding mechanisms frustrate cloud adoption—they prioritize features that the business wants now rather than critical infrastructure investments that will allow companies to add functionality more quickly and easily in the future. Each new bit of tactical business functionality built without best-practice cloud architectures adds to your technical debt—and thus to the complexity of building and implementing anything in the future. CIOs and CTOs need support from the rest of the management team to put in place stable funding models that will provide resources required to build underlying capabilities and remediate applications to run efficiently, effectively, and safely in the cloud.
Business-technology collaboration. Getting value from cloud platforms requires knowledgeable product owners with the power to make decisions about functionality and sequencing. That won’t happen unless the CEO and relevant business-unit heads mandate people in their organizations to be product owners and provide them with decision-making authority.
Engineering talent. Adopting the cloud requires specialized and sometimes hard-to-find technical talent—full-stack developers, data engineers, cloud-security engineers, identity and access-management specialists, cloud engineers, and site-reliability engineers. Unfortunately, some policies put in place a decade ago to contain IT costs can get in the way of onboarding cloud talent. Companies have adopted policies that limit costs per head and the number of senior hires, for example, which require the use of outsourced resources in low-cost locations. Collectively, these policies produce the reverse of what the cloud requires, which is a relatively small number of highly talented and expensive people who may not want to live in traditionally low-cost IT locations. CIOs and CTOs need changes in hiring and location policies to recruit and retain the talent needed for success in the cloud.
The recent COVID-19 pandemic has only heightened the need for companies to adopt AI-led business models. Only cloud platforms can provide the agility, scalability, and innovative capabilities required for this transition. While there have been frustrations and false starts in the enterprise cloud journey, companies can dramatically accelerate their progress by focusing cloud investments where they will provide the most business value and building cloud-ready operating models.
Best Practices to Accelerate & Transform Analytics Adoption in the Cloud
Reimagining analytics in the cloud enables enterprises to achieve greater agility, increase scalability, and optimize costs. But organizations take different paths to achieving their goals, and the best way to proceed depends on the data environment and business objectives. There are two best practices to maximize analytics adoption in the cloud:
- Cloud Data Warehouse, Data Lake, and Lakehouse Transformation: Strategically move the data warehouse and data lake to the cloud over time and adopt a modern, end-to-end data infrastructure for AI and machine learning projects.
- New Cloud Data Warehouse and Data Lake: Start small and fast and grow as needed by spinning up a new cloud data warehouse or cloud data lake. The same guidance applies whether implementing new data warehouses and data lakes in the cloud for the first time, or doing so for an individual department or line of business.
As cloud adoption grows, most organizations will eventually want to modernize their enterprise analytics infrastructure entirely in the cloud. With the transformation pathway, enterprises rebuild everything to take advantage of the most modern cloud-based enterprise data warehouse, data lake, and lakehouse technology and end up in the strongest position long term, but they migrate data and workloads from the existing on-premises enterprise data warehouse and data lake to the cloud incrementally, over time. This approach allows enterprises to be strategic while minimizing disruption. Enterprises can take the time to carefully evaluate data and bring over only what is needed, which makes this a less risky approach. It also enables more complex analysis of data using artificial intelligence and machine learning. The combination of a cloud data warehouse and data lake allows enterprises to manage the data necessary for analytics by providing economical scalability across compute and storage that is not possible with an on-premises infrastructure. And it enables them to incorporate new types of data, from IoT sensors, social media, text, and more, into their analysis to gain new insights.
For this pathway, enterprises need an intelligent, automated data platform that delivers a number of critical capabilities. It should handle new data sources, accommodate AI and machine learning projects, support new processing engines, deliver performance at massive scale, and offer serverless scale-up/scale-down capabilities. As with a brand-new cloud data warehouse or data lake, enterprises need cloud-native, best-of-breed data integration, data quality, and metadata management to maximize the value of cloud analytics. Once the data is in the cloud, organizations can provide users with self-service access to it so they can more easily and seamlessly create reports or make swift decisions. Ultimately, this transformation pathway gives organizations an end-to-end modern infrastructure for next-generation cloud analytics.
Lines of business increasingly rely on analytics to improve processes and business impact. For example, sales and marketing no longer ask, “How many leads did we generate?” They want to know how many sales-ready leads were gathered from Global 500 accounts, as evidenced by user time spent consuming content on the web. But individual lines of business may not have the time or resources to create and maintain an on-premises data warehouse to answer these questions. With a new cloud data warehouse and data lake, departments can get analytics projects off the ground quickly and cost effectively. Departments simply spin up their own cloud data warehouses, populate them with data, and make sure they’re connected to analytics and BI tools. For data science projects, a team may want to quickly add a cloud data lake. In some cases, this approach enables the team to respond to requests for sophisticated analysis faster than centralized teams normally can. Whatever the purpose of the new cloud data warehouse and data lake, enterprises need intelligent, automated cloud data management with best-of-breed, cloud-native data integration, data quality, and metadata management, all built on a cloud-native platform, in order to deliver value and drive ROI. And note that while this approach allows enterprises to start small and scale as needed, the downside is that the data warehouse and data lake may benefit only a particular department inside the enterprise.
Some organizations with significant investments in on-premises enterprise data warehouses and data lakes are looking to simply replicate their existing systems to the cloud. By lifting and shifting their data warehouse or data lake “as is” to the cloud, they seek to improve flexibility, increase scalability, and lower data center costs while migrating quickly to minimize disruption. Lifting and shifting an on-premises system to the cloud may seem fast and safe. But in reality, it’s an inefficient approach, one that’s like throwing everything you own into a moving van instead of packing strategically for a plane trip. In the long run, reducing baggage and traveling by air delivers greater agility and faster results because you are not weighed down by unnecessary clutter. Some organizations may need to do a lift and shift, but most will find it’s not the best course of action because it simply perpetuates outdated or inefficient legacy systems and offers little in the way of innovation.
CXO Insights: Establishing AI fluency with Boards – The new strategic imperative
Though it is a rhetorical theme, we can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that is relevant whether organizations are developing AI systems or buying AI-powered software. With AI adoption increasingly widespread, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization’s overall mission and risk management.
According to a recent global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”
Why is this an imperative? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. Recently, IBM stopped selling its facial recognition technology altogether. Further, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems and platforms in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them. Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environmental, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that AI needs to be added to the board’s portfolio.
How Boards can assess the quality & impact of AI
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance.
Boards assessing the quality and impact of AI, and the oversight it requires, should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a siloed technology. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
- The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized on goals articulated — most often — by those developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren’t resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives.
The choices made by AI developers — or by an HR manager considering a third-party resume-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but some are not, such as those that result in hard-to-detect errors or bias that can suppress diversity or that charge customers different prices based on gender. Board oversight must include requirements for policies at both the corporate level and the use-case level that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed up by practical processes, strong culture, and compliance structures.
Enterprises may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — may be what drives AI accountability in the short term. Individuals charged with advancing responsible AI within companies have reported that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust and quality, and its alignment with its mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Business continuity concerns. Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous and unknown third-party technologies, which might threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI carries the potential of new kinds of cyber threats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
The “NEW” strategic imperative for Boards
Regardless of how a board decides to approach AI fluency, it will play a critical role in considering the impact of the AI technologies that a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment. The hardest questions a board will face about proposed AI applications are likely to be “Should we adopt AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal or at odds with organizational values.
Boards should do the following in order to establish adequate AI fluency mechanics:
- Learn where in the organization AI and other exponential technologies are being used or are planned to be used, and why.
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight.
- Encourage the appointment of a C-level executive to be responsible for this work, across company functions.
- Encourage adequate resourcing and training of the oversight function.
It’s not too soon for boards to begin this work; even for enterprises with little investment in AI development, AI will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards. AI fluency is the new strategic agenda.
Panel Discussion at IET on Ethical AI: Implementation, Challenges & Frameworks
Panel Discussion on Ethical AI: Implementations, Challenges & Frameworks organised by The Institution of Engineering & Technology.
November 6th, 2020 | 2:00pm – 3:30pm IST
How can AI systems work within the frameworks of human ethics? Join the experts as they demystify the need and regulatory landscape required to embed morals and ethics in AI.
Register here: https://buff.ly/3egUl4p
Speakers:
- Moderator: Sameer Dhanrajani, CEO & Co-founder, AIQRATE Advisory & Consulting
- Dr. Rohini Srivathsa, CTO, Microsoft India
- Avik Sarkar, Former Head – Data Analytics Cell, NITI Aayog
- Nitin Sareen, Consumer Analytics Lead, Wells Fargo
Sameer Dhanrajani in IET Future Tech Panel
The Institution of Engineering & Technology (IET) has included Sameer Dhanrajani in its Future Tech Panel, with a core focus on Artificial Intelligence.
Read more at https://twitter.com/ietindia/status/1310869874998382592?s=24
Managing Bias in AI: Strategic Risk Management Strategy for Banks
AI is set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and improve risk management. According to the EIU, this could generate value of more than $250 billion in the banking industry. But there is a downside, since ML models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models. The added risk brought on by the complexity of algorithmic models can be mitigated by making well-targeted modifications to existing validation frameworks.
Conscious of the problem, many banks are proceeding cautiously, restricting the use of ML models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of anti-discrimination laws and incur significant fines—a concern that pushed one bank to ban its HR department from using a machine-learning resume screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
Regulators have not issued specific instructions on how to do this. In the United States, they have stipulated that banks are responsible for ensuring that risks associated with machine-learning models are appropriately managed, while stating that existing regulatory guidelines, such as the Federal Reserve’s “Guidance on Model Risk Management” (SR11-7), are broad enough to serve as a guide. Enhancing model-risk management to address the risks of machine-learning models will require policy decisions on what to include in a model inventory, as well as determining risk appetite, risk tiering, roles and responsibilities, and model life-cycle controls, not to mention the associated model-validation practices. The good news is that many banks will not need entirely new model-validation frameworks. Existing ones can be fitted for purpose with some well-targeted enhancements.
New Risk mitigation exercises for ML models
There is no shortage of news headlines revealing the unintended consequences of new machine-learning models. Algorithms that created a negative feedback loop were blamed for the “flash crash” of the British pound by 6 percent in 2016, for example, and it was reported that a self-driving car tragically failed to properly identify a pedestrian walking her bicycle across the street. The cause of the risks that materialized in these machine-learning models is the same as the cause of the amplified risks that exist in all machine-learning models, whatever the industry and application: increased model complexity. Machine-learning models typically act on vastly larger data sets, including unstructured data such as natural language, images, and speech. The algorithms are typically far more complex than their statistical counterparts and often require design decisions to be made before the training process begins. And machine-learning models are built using new software packages and computing infrastructure that require more specialized skills. The response to such complexity does not have to be overly complex, however. If properly understood, the risks associated with machine-learning models can be managed within banks’ existing model-validation frameworks.
Here are the strategic approaches for enterprises to ensure that the specific risks associated with machine learning are addressed:
Demystification of “Black Boxes”: Machine-learning models have a reputation for being “black boxes.” Depending on the model’s architecture, the results it generates can be hard to understand or explain. One bank worked for months on a machine-learning product-recommendation engine designed to help relationship managers cross-sell. But because the managers could not explain the rationale behind the model’s recommendations, they disregarded them. They did not trust the model, which in this situation meant wasted effort and perhaps wasted opportunity. In other situations, acting upon (rather than ignoring) a model’s less-than-transparent recommendations could have serious adverse consequences.
The degree of demystification required is a policy decision for banks to make based on their risk appetite. They may choose to hold all machine-learning models to the same high standard of interpretability or to differentiate according to the model’s risk. In the United States, models that determine whether to grant credit to applicants are covered by fair-lending laws. The models therefore must be able to produce clear reason codes for a refusal. On the other hand, banks might well decide that a machine-learning model’s recommendation to place a product advertisement on the mobile app of a given customer poses so little risk to the bank that understanding the model’s reasons for doing so is not important. Validators also need to ensure that models comply with the chosen policy. Fortunately, despite the black-box reputation of machine-learning models, significant progress has been made in recent years to help ensure their results are interpretable. A range of approaches can be used, based on the model class:
- Linear and monotonic models (for example, linear-regression models): linear coefficients help reveal the dependence of a result on the inputs.
- Nonlinear and monotonic models (for example, gradient-boosting models with monotonic constraints): restricting inputs so they have either a rising or falling relationship globally with the dependent variable simplifies the attribution of inputs to a prediction.
- Nonlinear and nonmonotonic models (for example, unconstrained deep-learning models): methodologies such as local interpretable model-agnostic explanations (LIME) or Shapley values help ensure local interpretability.
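As an illustrative sketch only, not something prescribed in the original text, the snippet below contrasts these approaches using scikit-learn and the open-source shap package (both assumptions about tooling): global coefficients for a linear model, a monotonic constraint on a gradient-boosting model, and local Shapley-value attributions for an unconstrained model.

```python
# Illustrative sketch: assumes scikit-learn and the open-source `shap` package are installed.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Linear and monotonic: coefficients show each input's global effect directly.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Linear coefficients:", np.round(linear.coef_, 3))

# Nonlinear and monotonic: a monotonic constraint keeps each input's effect
# directionally simple, which makes attribution easier to defend to validators.
monotone = HistGradientBoostingClassifier(monotonic_cst=[1] * X.shape[1]).fit(X, y)

# Nonlinear and nonmonotonic: Shapley values give local, per-prediction attributions.
unconstrained = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(unconstrained)
shap_values = explainer.shap_values(X[:5])  # feature attributions for five individual predictions
print("Shapley values (first 5 predictions):\n", np.round(shap_values, 3))
```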
Bias: A model can be influenced by four main types of bias: sample bias, measurement bias, algorithmic bias, and bias against groups or classes of people. The latter two types, algorithmic bias and bias against people, can be amplified in machine-learning models. For example, the random-forest algorithm tends to favor inputs with more distinct values, a bias that elevates the risk of poor decisions. One bank developed a random-forest model to assess potential money-laundering activity and found that the model favored fields with a large number of categorical values, such as occupation, when fields with fewer categories, such as country, were better able to predict the risk of money laundering.
To address algorithmic bias, model-validation processes should be updated to ensure appropriate algorithms are selected in any given context. In some cases, such as random-forest feature selection, there are technical solutions. Another approach is to develop “challenger” models, using alternative algorithms to benchmark performance. To address bias against groups or classes of people, banks must first decide what constitutes fairness. Four definitions are commonly used, though which to choose may depend on the model’s use:
- Demographic blindness: decisions are made using a limited set of features that are highly uncorrelated with protected classes, that is, groups of people protected by laws or policies.
- Demographic parity: outcomes are proportionally equal for all protected classes.
- Equal opportunity: true-positive rates are equal for each protected class.
- Equal odds: true-positive and false-positive rates are equal for each protected class.
Validators then need to ascertain whether developers have taken the necessary steps to ensure fairness. Models can be tested for fairness and, if necessary, corrected at each stage of the model-development process, from the design phase through to performance monitoring.
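For illustration only (an assumption, not part of the original text), a minimal sketch of how validators might quantify the last three definitions from a model's predictions; the array names and two-group setup are hypothetical.

```python
# Minimal sketch, assuming binary labels, binary predictions, and a two-group
# protected attribute are available as NumPy arrays (hypothetical setup).
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Gaps between groups for demographic parity, equal opportunity, and equal odds."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        pos = y_true[m] == 1
        rates[g] = {
            "selection_rate": y_pred[m].mean(),                         # demographic parity
            "tpr": y_pred[m][pos].mean() if pos.any() else np.nan,      # equal opportunity
            "fpr": y_pred[m][~pos].mean() if (~pos).any() else np.nan,  # equal odds (with TPR)
        }
    g0, g1 = sorted(rates)
    return {k + "_gap": abs(rates[g0][k] - rates[g1][k]) for k in ("selection_rate", "tpr", "fpr")}

# Toy data: in practice these would come from a validation set scored by the model.
rng = np.random.default_rng(0)
y_true, y_pred, group = (rng.integers(0, 2, 1000) for _ in range(3))
print(fairness_gaps(y_true, y_pred, group))  # a policy threshold on each gap can trigger review
```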
Feature engineering: This is often much more complex in the development of machine-learning models than in traditional models, for three reasons. First, machine-learning models can incorporate a significantly larger number of inputs. Second, unstructured data sources such as natural language require feature engineering as a preprocessing step before the training process can begin. Third, increasing numbers of commercial machine-learning packages now offer so-called AutoML, which generates large numbers of complex features to test many transformations of the data. Models produced using these features run the risk of being unnecessarily complex, contributing to overfitting. For example, one institution built a model using an AutoML platform and found that specific sequences of letters in a product application were predictive of fraud. This was a completely spurious result caused by the algorithm’s maximizing the model’s out-of-sample performance.
In feature engineering, banks have to make a policy decision to mitigate risk. They have to determine the level of support required to establish the conceptual soundness of each feature. The policy may vary according to the model’s application. For example, a highly regulated credit-decision model might require that every individual feature in the model be assessed. For lower-risk models, banks might choose to review the feature-engineering process only: for example, the processes for data transformation and feature exclusion. Validators should then ensure that features and/or the feature-engineering process are consistent with the chosen policy. If each feature is to be tested, three considerations are generally needed: the mathematical transformation of model inputs, the decision criteria for feature selection, and the business rationale. For instance, a bank might decide that there is a good business case for using debt-to-income ratios as a feature in a credit model but not frequency of ATM usage, as this might penalize customers for using an advertised service.
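A hedged sketch of how such a policy might be supported in practice (the tooling and the overfitting check are assumptions, not the article's prescription): comparing in-sample and out-of-sample performance to flag overfit, AutoML-style feature sets, and using permutation importance on held-out data to produce the list of features whose business rationale validators should challenge.

```python
# Illustrative sketch using scikit-learn (an assumption about tooling).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large gap between in-sample and out-of-sample AUC signals overfitting,
# often driven by complex or spurious engineered features.
auc_in = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_out = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC in-sample: {auc_in:.3f}, out-of-sample: {auc_out:.3f}")

# Permutation importance on held-out data gives validators a ranked list of
# features whose business rationale should be documented and challenged.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in imp.importances_mean.argsort()[::-1][:5]:
    print(f"feature_{idx}: importance {imp.importances_mean[idx]:.4f}")
```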
Hyperparameters: Many of the parameters of machine-learning models, such as the depth of trees in a random-forest model or the number of layers in a deep neural network, must be defined before the training process can begin. In other words, their values are not derived from the available data. Rules of thumb, parameters used to solve other problems, or even trial and error are common substitutes. Decisions regarding these kinds of parameters, known as hyperparameters, are often more complex than analogous decisions in statistical modeling. Not surprisingly, a model’s performance and its stability can be sensitive to the hyperparameters selected. For example, banks are increasingly using binary classifiers such as support-vector machines in combination with natural-language processing to help identify potential conduct issues in complaints. The performance of these models and their ability to generalize can be very sensitive to the selected kernel function. Validators should ensure that hyperparameters are chosen as soundly as possible. For some quantitative inputs, as opposed to qualitative inputs, a search algorithm can be used to map the parameter space and identify optimal ranges. In other cases, the best approach to selecting hyperparameters is to combine expert judgment and, where possible, the latest industry practices.
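As a sketch of the search-algorithm approach mentioned above (the estimator, parameter ranges, and scoring choice are illustrative assumptions), randomized search can map the hyperparameter space instead of relying on rules of thumb or trial and error.

```python
# Illustrative sketch: randomized hyperparameter search with scikit-learn (assumed tooling).
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "max_depth": randint(2, 20),         # tree depth, a typical hyperparameter
        "n_estimators": randint(50, 500),
        "min_samples_leaf": randint(1, 20),
    },
    n_iter=25,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print("Best hyperparameters:", search.best_params_)
print("Cross-validated AUC:", round(search.best_score_, 3))
```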
Production readiness: Traditional models are often coded as rules in production systems. Machine-learning models, however, are algorithmic and therefore require more computation. This requirement is commonly overlooked in the model-development process: developers build complex predictive models only to discover that the bank’s production systems cannot support them. One US bank spent considerable resources building a deep learning–based model to predict transaction fraud, only to discover it did not meet required latency standards. Validators already assess a range of model risks associated with implementation, but for machine learning they will need to expand the scope of this assessment: estimating the volume of data that will flow through the model, assessing the production-system architecture (for example, graphics-processing units for deep learning), and estimating the runtime required.
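A minimal sketch of a pre-deployment readiness check, assuming a hypothetical per-request latency budget and a locally trained stand-in model; in practice this would run against the bank's actual production-system architecture.

```python
# Illustrative sketch: estimating single-request scoring latency before deployment.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
model = GradientBoostingClassifier().fit(X, y)  # stand-in for the candidate production model

LATENCY_BUDGET_MS = 50   # hypothetical per-request latency standard
single_request = X[:1]   # one transaction scored in real time

timings_ms = []
for _ in range(200):
    start = time.perf_counter()
    model.predict_proba(single_request)
    timings_ms.append((time.perf_counter() - start) * 1000)

p99 = float(np.percentile(timings_ms, 99))
print(f"p99 latency: {p99:.2f} ms (budget {LATENCY_BUDGET_MS} ms)")
if p99 > LATENCY_BUDGET_MS:
    print("Model does not meet the latency standard; revisit model size or serving architecture.")
```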
Dynamic model calibration: Some classes of machine-learning models modify their parameters dynamically to reflect emerging patterns in the data, replacing the traditional approach of periodic manual review and model refresh. Examples include reinforcement-learning algorithms and Bayesian methods. The risk is that without sufficient controls, an overemphasis on short-term patterns in the data could harm the model’s performance over time. Banks therefore need to decide when to allow dynamic recalibration. They might conclude that, with the right controls in place, it is suitable for some applications, such as algorithmic trading; for others, such as credit decisions, they might require clear proof that dynamic recalibration outperforms static models. With the policy set, validators can evaluate whether dynamic recalibration is appropriate given the intended use of the model, develop a monitoring plan, and ensure that appropriate controls are in place to identify and mitigate risks that might emerge. These might include thresholds that catch material shifts in a model’s health, such as out-of-sample performance measures, and guardrails such as exposure limits or other predefined values that trigger a manual review.
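For illustration, a sketch of such a guardrail (the threshold values and names are hypothetical assumptions): dynamic recalibration is paused and a manual review is triggered when out-of-sample performance or exposure breaches predefined limits.

```python
# Illustrative sketch of a recalibration guardrail with hypothetical thresholds.
from dataclasses import dataclass

@dataclass
class Guardrail:
    min_out_of_sample_auc: float = 0.70   # material-shift threshold on model health
    max_exposure: float = 5_000_000.0     # predefined exposure limit

    def review_required(self, out_of_sample_auc: float, current_exposure: float) -> bool:
        breaches = []
        if out_of_sample_auc < self.min_out_of_sample_auc:
            breaches.append("out-of-sample performance below threshold")
        if current_exposure > self.max_exposure:
            breaches.append("exposure limit exceeded")
        for reason in breaches:
            print(f"Manual review triggered: {reason}")
        return bool(breaches)

guardrail = Guardrail()
if guardrail.review_required(out_of_sample_auc=0.66, current_exposure=6_200_000.0):
    print("Pause dynamic recalibration until the review is complete.")
```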
Banks will need to proceed gradually. The first step is to make sure model inventories include all machine learning–based models in use. One bank’s model risk-management function was certain the organization was not yet using machine-learning models, until it discovered that its recently established innovation function had been busy developing machine-learning models for fraud and cyber security.
From here, validation policies and practices can be modified to address machine-learning-model risks, though initially for a restricted number of model classes. This helps build experience while testing and refining the new policies and practices. Considerable time will be needed to monitor a model’s performance and finely tune the new practices. But over time banks will be able to apply them to the full range of approved machine-learning models, helping companies mitigate risk and gain the confidence to start harnessing the full power of machine learning.
Webinar on AI Strategy at BITS Pilani
Continuing with the webinar series for the batch of 2021 & 2022, along with its noteworthy alumni, BITS Pilani called in AIQRATE for a session on ‘AI Strategy: The new next in Transformation and Innovation’ by Mr Sameer Dhanrajani on 26th August 2020.
Webinar on AI & Analytics – Chitkara Business School
Emergence of AI Powered Enterprise: Strategic considerations for Leaders
The excitement around artificial intelligence is palpable. It seems that not a day goes by without one of the giants of the industry coming out with a breakthrough application of the technology or adding a new nuance to the overall body of knowledge. Horizontal and industry-specific use cases of AI abound, and there is always something exciting around the corner.
However, with the keen interest from global leaders of multinational corporations, the conversation is shifting towards having a strategic agenda for AI in the enterprise. Business heads are less interested in topical experiments and minuscule productivity gains made in the short term. They are more keen to understand the impact of AI in their areas of work from a long-term standpoint. Perhaps the most important question that they want to see answered is – what will my new AI-enabled enterprise look like? The question is as strategic as it is pertinent. For business leaders, the most important issues are – improving shareholder returns and ensuring a productive workforce – as part of running a sustainable, future-ready business. Artificial intelligence may be the breakout technology of our time, but business leaders are more occupied with trying to understand just how this technology can usher in a new era of their business, how it is expected to upend existing business value chains, unlock new revenue streams, and deliver improved efficiencies in cost outlays. In this article, let us try to answer these questions.
AI is Disrupting Existing Value Chains
Ever since Michael Porter first expounded on the concept in his best-selling book, Competitive Advantage: Creating and Sustaining Superior Performance, the value chain has gained great currency in the minds of business leaders globally. The idea behind the value chain was to map out the interlinkages between the primary activities that work together to conceptualize and bring a product or service to market (R&D, manufacturing, supply chain, marketing, etc.), as well as the role played by support activities performed by other internal functions (finance, HR, IT, etc.). Strategy leaders globally leverage the concept of value chains to improve business planning, identify new possibilities for improving business efficiency, and exploit potential areas for new growth.
Now with AI entering the fray, we might see new vistas in the existing value chains of multinational corporations. For instance:
- Manufacturing is becoming heavily augmented by artificial intelligence and robotics. We are seeing these technologies gain a stronger foothold across processes requiring increasing sophistication. Business leaders now need to seriously consider workforce planning for a labor force that consists of both human and artificial workers at their manufacturing units. Due attention should also be paid to ensuring that both coexist in a symbiotic and complementary manner.
- Logistics and Delivery are two other areas where we are seeing steady growth in the use of artificial intelligence. Demand planning and fulfilment through AI has already reached a high level of sophistication at most retailers. Now Amazon – which handles some of the largest and most complex logistics networks in the world – is in advanced stages of bringing in unmanned aerial vehicles (drones) for deliveries through its Amazon Prime Air program. Business leaders expect outcomes ranging from increased customer satisfaction (through faster deliveries) to a reduction in the cost of the delivery process.
- Marketing and Sales are constantly at the forefront of some of the most exciting inventions in AI. One of the most recent and evolved applications of AI is Reactful. A tool developed for eCommerce properties, Reactful helps drive better customer conversions by analyzing the clickstream and digital footprints of people who are on web properties and persuading them into making a purchase. Business leaders need to explore new ideas such as this that can help drive meaningful engagement and top-line growth through these new AI-powered tools.
AI is Enabling New Revenue Streams
The second way business leaders are thinking strategically about AI is its potential to unlock new sources of revenue. Earlier, functions such as internal IT were seen as cost centers. In today’s world, due to cost and competitive pressures, areas of the business that were traditionally considered cost centers are required to reinvent themselves into revenue and profit centers. The expectation from AI is no different. There is a need to justify the investments made in this technology and to find a way for it to unlock new streams of revenue in traditional organizations. Here are two key ways in which business leaders can monetize AI:
- Indirect Monetization is one of the forms of leveraging AI to unlock new revenue streams. It involves embedding AI into traditional business processes with a focus on driving increased revenue. We hear of multiple companies from Amazon to Google that use AI-powered recommendation engines to drive incremental revenue through intelligent recommendations and smarter bundling. The action item for business leaders is to engage stakeholders across the enterprise to identify areas where AI can be deeply ingrained within tech properties to drive incremental revenue.
- Direct Monetization involves directly adding AI as a feature to existing offerings. Examples abound in this area – from Salesforce bringing in Einstein into their platform as an AI-centric service to cloud infrastructure providers such as Amazon and Microsoft adding AI capabilities into their cloud offerings. Business leaders should brainstorm about how AI augments their core value proposition and how it can be added into their existing product stack.
AI is Bringing Improved Efficiencies
The third critical intervention for a new AI-enabled enterprise is bringing to the fore a more cost-effective business. Numerous topical and early-stage experiments with AI have brought interesting successes in reducing the total cost of doing business. Now is the time to create a strategic roadmap for these efficiency-led interventions and quantitatively measure their impact on the business. Some food for thought for business leaders:
- Supply Chain Optimization is an area that is ripe for AI-led disruption. With increasing varieties of products and categories and new virtual retailers arriving on the scene, there is a need for companies to reduce their outlay on the network that procures and delivers goods to consumers. One example of AI augmenting the supply chain function comes from Evertracker – a Hamburg-based startup. By leveraging IOT sensors and AI, they help their customers identify weaknesses such as delays and possible shortages early, basing their analysis on internal and external data. Business leaders should scout for solutions such as these that rely on data to identify possible tweaks in the supply chain network that can unlock savings for their enterprises.
- Human Resources is another area where AI-centric solutions can be extremely valuable to drive down the turnaround time for talent acquisition. One such solution is developed by Recualizer – which reduces the need for HR staff to scan through each job application individually. With this tool, talent acquisition teams need to first determine the framework conditions for a job on offer, while leaving the creation of assessment tasks to the artificial intelligence system. The system then communicates the evaluation results and recommends the most suitable candidates for further interview rounds. Business leaders should identify such game-changing solutions that can make their recruitment much more streamlined – especially if they receive a high number of applications.
- The Customer Experience arena also throws up very exciting AI use cases. We have now gone well beyond just bots answering frequently asked questions. Today, AI-enabled systems can also provide personalized guidance to customers that can help organizations level up their customer experience, while maintaining a lower cost of delivering that experience. Booking.com is a case in point. Their chatbot helps customers identify interesting activities and events that they can avail of at their travel destinations. Business leaders should explore such applications that provide the double advantage of improving customer experience, while maintaining strong bottom-line performance.
The possibilities for the new AI-enabled enterprise are as exciting as they are varied. The ideas shared here are by no means exhaustive, but hopefully they seed interesting ideas for powering improved business performance. Strategy leaders and business heads need to consider how their AI-led businesses can help disrupt their existing value chains for the better, and unlock new ideas for improving bottom-line and top-line performance. This will usher in a new era of the enterprise, enabled by AI.
(AIQRATE is a bespoke global AI advisory and consulting firm. A first in its genre, AIQRATE provides strategic AI advisory services and consulting offerings across multiple business segments to enable clients on their AI-powered transformation & innovation journey and accentuate their decision making and business performance.
AIQRATE works closely with Boards, CXOs and Senior Leaders, advising them on navigating their Analytics-to-AI journey with the art of the possible, or jump-starting their AI progression with an AI@scale approach, followed by consulting on embedding AI as core to business strategy within business functions and augmenting the decision-making process with AI. We offer proven bespoke AI advisory services to enable CXOs and Senior Leaders to curate & design the building blocks of AI strategy, embed AI@scale interventions and create AI-powered organizations. AIQRATE’s path-breaking 50+ AI consulting frameworks, assessments, primers, toolkits and playbooks enable Indian & global enterprises, GCCs, Startups, SMBs, VC/PE firms, and Academic Institutions to enhance business performance and accelerate decision making.
Visit www.aiqrate.ai to experience our AI advisory services & consulting offerings.)