HOW AI CAN ENABLE ENTERPRISES TO IMPLEMENT GENERAL DATA PROTECTION REGULATION (GDPR) POLICIES
The General Data Protection Regulation (GDPR), which goes into effect on May 25, 2018, requires all companies that collect data on citizens of EU countries to provide a "reasonable" level of protection for personal data. The ramifications of non-compliance are significant, with fines of up to 4% of a firm's global revenues.
The European Union's sweeping new data privacy law is triggering a lot of sleepless nights for CIOs grappling with how to comply effectively with the new regulations and help their organizations avoid potentially hefty penalties.
Could AI be the answer to complying with a regulation as demanding as GDPR?
The bar for GDPR compliance is set high. The regulation broadly interprets what constitutes personal data, covering everything from basic identity information to web data such as IP addresses and cookies, along with more personal artifacts including biometric data, sexual orientation, and even political opinions. The new regulation mandates, among other things, that personal data be erased if deemed unnecessary. Maintaining compliance over such a broad data set is all the more challenging when it is distributed among on-premises data centers, cloud offerings, and business partner systems.
The complexity of the problem has made GDPR a top data protection priority. A PwC survey found that 77% of U.S. organizations plan to spend $1 million or more to meet GDPR requirements. An Ovum report found that two-thirds of U.S. companies believe they will have to modify their global business strategies to accommodate new data privacy laws, and over half are expecting to face fines for non-compliance with the pending GDPR legislation.
This raises the question: Can AI help organizations meet the GDPR's compliance deadline and avoid penalties? After all, AI is all about handling and deriving insights from vast amounts of data, and GDPR demands that organizations comb through their databases for rafts of personal information that falls under its purview. The answer is yes; in fact, in several significant instances, AI solutions for regulatory compliance and governance are already gaining traction.
For example, Informatica is utilizing advances in artificial intelligence (AI) to help organizations improve visibility and control over geographically dispersed data, providing companies with a holistic, intelligent, and automated approach to governance for the challenges posed by GDPR.
AI Interventions in Data Regulation Compliance and Governance
Data Location Discovery and PII Management
It's essential to know the location of all customer data across all systems. The first step for a company is to create a risk assessment that estimates what kinds of data are likely to be requested and how many requests might be expected. Locating all customer data and ensuring GDPR-compliant management can be a daunting task, but there are options for automating those processes.
With AI, one can readily recognize concepts such as person names, which matters in this context. To find out how many documents refer to persons (as opposed to companies), or how many social security numbers and phone numbers reside in any one repository, one can combine those analytics and conclude that the repository likely holds a great deal of personal data. That, in turn, provides a way to prioritize in the context of GDPR.
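As a rough illustration of that kind of triage, here is a minimal sketch of a regex-only PII scanner. The pattern names and the threshold are illustrative assumptions; a production system would pair such patterns with trained named-entity recognition rather than rely on regexes alone.

```python
import re

# Illustrative patterns only; real systems combine these with trained NER models.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_document(text: str) -> dict:
    """Count hits for each PII category in a single document."""
    return {name: len(pat.findall(text)) for name, pat in PATTERNS.items()}

def prioritize(repository: list, threshold: int = 1) -> list:
    """Return indices of documents whose total PII hits meet the threshold,
    sorted by descending hit count: a simple GDPR triage order."""
    scored = [(i, sum(scan_document(doc).values())) for i, doc in enumerate(repository)]
    return [i for i, score in sorted(scored, key=lambda x: -x[1]) if score >= threshold]
```

Running `prioritize` over a repository surfaces the most personal-data-heavy documents first, which is exactly the prioritization described above.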
For example, M-Files uses Artificial Intelligence to streamline the process of locating and managing PII (personally identifiable information), which often resides in a host of different systems, network folders and other information silos, making it even more challenging for companies to control and protect it.
AI-Based Data Cataloguing
A solution that utilizes AI-based machine learning techniques to improve tracking and cataloging data across hybrid deployments can help companies do more accurate reporting while boosting overall efforts to achieve GDPR compliance. By automating the process of discovering and properly recording all types of data and data relationships, organizations can develop a comprehensive view of compliance-related data tucked away in non-traditional sources such as email, social media, and financial transactions – a near-impossible task using traditional solutions and manual processes.
Contextual Engines for Dynamic Data Environments
The GDPR changes how companies should look at data storage. The risk of data being compromised increases depending on how it is stored, in how many different systems it is stored, how many people are involved in the process, and how long it is kept. Now that PII on job applications is regulated under GDPR, a company may want to routinely purge that data fairly quickly to avoid the risk of a data breach or audit. These are the kinds of procedural questions organizations will really have to think through.
There are instances where completely removing all data is impossible. Some data, such as billing records, must be retained, and there may be conflicting regulations, such as records-retention laws. If a citizen asks for that data to be removed, it adds considerable complexity to the process: understanding which data can be removed from the system and which cannot. There will be conflicting situations where this regulation says one thing while an accounting act or a local or state regulation says another.
This calls for contextual engines built using AI that are highly aware of the changing circumstances around the data and can create a plan for how each piece of data should be stored, managed, and purged. Such engines can also provide accurate guidance on the levels of encryption and the data-storage techniques needed for different data, thereby conserving hardware resources and increasing protection against malicious attacks and data breaches while minimizing the risk of GDPR violations.
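A minimal sketch of such a retention decision follows, with hard-coded, hypothetical policy entries; a real contextual engine would load or learn jurisdiction-specific rules rather than embed them. The categories, retention periods, and the "review" escalation are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    category: str            # e.g. "job_application", "billing"
    created: date
    erasure_requested: bool = False

# category -> (max retention, can it be erased on request despite a retention floor?)
POLICY = {
    "job_application": (timedelta(days=180), True),
    "billing":         (timedelta(days=365 * 7), False),  # retention law wins
}

def decide(record: Record, today: date) -> str:
    """Return 'purge', 'retain', or 'review' for a record under conflicting rules."""
    max_age, erasable = POLICY.get(record.category, (timedelta(days=365), True))
    if today - record.created > max_age:
        return "purge"
    if record.erasure_requested:
        # Conflict between the erasure request and a retention law: escalate.
        return "purge" if erasable else "review"
    return "retain"
```

The "review" branch captures exactly the conflict described above, where an erasure request collides with, say, an accounting act's retention requirement.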
Working Out the Kinks in AI-Led GDPR Compliance
GDPR aims to give EU citizens greater control over their personal data and to hold companies accountable on matters such as data use consent, data anonymization, breach notification, cross-border data transfer, and appointment of data protection officers. For example, organizations will have to honor individuals’ “right to be forgotten,” where applicable — fulfilling requests to delete information and providing proof that it was done. They must also obtain explicit, rather than implied, permission to gather data. And they are required to allow people to see their own data in a commonly readable format.
The system will undoubtedly work those issues out, but, in the meantime, companies should roll up their sleeves and take a thorough, systematic multi-step approach. The multi-step strategy should include:
Data. A comprehensive plan to document and categorize the personal data an organization has, where it came from, and who it is shared with.
Privacy notices. A review of privacy notices to align with new GDPR requirements.
Individuals’ rights. People have enhanced rights, such as the right to be forgotten, and new rights, such as data portability. This demands a check of procedures, processes, and data formats to ensure the new terms can be met.
Legal basis for processing personal data. Companies will need to document the legal basis for processing personal data, in privacy notices and other places.
Consent. Companies should review how they obtain and record consent, as they will be required to document it. Consent must be a positive indication; it cannot be inferred. An audit trail is necessary.
Children. There will be new safeguards for children’s data. Companies will need to establish systems to verify individuals’ ages and gather parental or guardian consent for data-processing activity.
Data breaches. New breach notification rules and new fines will affect many organizations, making it essential to understand how to detect, report, and investigate personal data breaches.
Privacy by design. A privacy by design and data minimization approach will become an express legal requirement. It’s important for organizations to plan how to meet the new terms.
Data protection officers. Organizations may need to designate a data protection officer and figure out who will take responsibility for compliance and how they will position the role.
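On the consent item above: an append-only, hash-chained log is one way to provide the positive indication and audit trail the regulation calls for. The class and field names here are assumptions for illustration, not a prescribed design.

```python
import hashlib
import json
from datetime import datetime, timezone

class ConsentLog:
    """Append-only consent records, hash-chained so tampering is evident."""

    def __init__(self):
        self.entries = []

    def record(self, subject_id: str, purpose: str, granted: bool) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "subject": subject_id,
            "purpose": purpose,
            "granted": granted,  # explicit positive indication only, never inferred
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the hash chain to detect any altered entry."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Withdrawals are recorded as new entries rather than edits, so the history of consent, including its removal, survives as evidence.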
Will GDPR-Alignment Measures Necessarily Be Disruptive?
Many companies are going through significant changes as a result of the new regulations, and the efficiency and speed that AI-powered compliance platforms offer can significantly help streamline the entire process for companies seeking to ensure compliance.
There are plenty of challenges keeping CIOs up at night. By taking a more intelligence-driven approach to data discovery, preparation, management, and governance, the impending GDPR mandate doesn't have to be one of them.
Data Glut to Data Abundance; The Fight for Data Supremacy – Enter the Age of Algorithm Ascendancy
The definition of a data breach has evolved: beyond incidents of malicious intent, it now also covers those occurring as a consequence of bad data policies and regulatory oversight. This means even policies that have been legally vetted might, in certain circumstances, end up opening the door to a significant breach of data, user privacy, and ultimately user trust.
For example, Facebook recently banned the data analytics company Cambridge Analytica from buying ads on its platform. The voter-profiling firm allegedly procured psychological profiles of 50 million people through research-app developer Aleksandr Kogan, who broke Facebook's data policies by sharing data from his personality-prediction app, which mined information from the social network's users.
Kogan's app, 'thisisyourdigitallife', harvested data not only from the individuals participating but also from everyone on their friend lists. Because Facebook's terms of service weren't so clear back in 2014, the app allowed Kogan to share the data with third parties like Cambridge Analytica. Policy-wise, it is a grey area whether the breach can be considered 'unauthorized', but it clearly happened without any express authorization from Facebook. The personal information was subsequently used to target voters and sway public opinion.
This differs from the site hackings in which credit card information was stolen from major retailers: the company in question, Cambridge Analytica, actually had the right to use this data. The problem is that it used the information without permission, in a way that was overtly deceptive to both Facebook users and Facebook itself.
Fallout of Data Breaches: Developers Left to Deal with Tighter Controls
Facebook will become less attractive to app developers if it tightens norms for data usage as a fallout of the prevailing controversy over alleged misuse of personal information mined from its platform, say industry members.
India has the second-largest developer base for Facebook, a community that builds apps and games on the platform and engages its users. With 241 million users, the country last July overtook the US as the largest user base for the social network.
There will be more scrutiny now. With, say, a sign-on, the basic data you can get is the user's name and email address, and even that will undergo tremendous scrutiny before being approved. That will have an impact on timelines, and the viral effect could decrease. Without explicit permission from users, you can no longer reach out to their contacts. The overhead thus falls on developers because of such data breaches, which wouldn't have occurred in the first place had the policies surrounding user data been more distinct and clear.
Renewed Focus on Conflicting Data Policies and Human Factors
These passive breaches, arising from unclear and conflicting policies instituted by Facebook, offer a clear example of why active breaches (involving malicious attacks) and passive breaches (involving technically authorized but legally unsavoury data sharing) need equal priority and should both be pertinent focuses of data protection.
While Facebook CEO Mark Zuckerberg has vowed to make changes to prevent these types of information grabs from happening in the future, many of those tweaks will presumably be made internally. Individuals and companies still need to take their own action to ensure their information remains as protected and secure as possible.
Humans are the weakest link in data protection, and potentially even the leading cause for the majority of incidents in recent years. This debacle demonstrates that cliché to its full extent. Experts believe that any privacy policy needs to take into account all third parties who get access to the data too. While designing a privacy policy one needs to keep the entire ecosystem in mind. For instance, a telecom player or a bank while designing their privacy policy will have to take into account third parties like courier agencies, teleworking agencies, and call centers who have access to all their data and what kind of access they have.
Dealing with Privacy in Analytics: Privacy-Preserving Data Mining Algorithms
The problem of privacy-preserving data mining has become more important in recent years because of the increasing ability to store personal data about users and the increasing sophistication of data mining algorithms that leverage this information. A number of algorithmic techniques, such as randomization and k-anonymity, have been suggested in recent years to perform privacy-preserving data mining. Different communities have explored parallel lines of work with regard to privacy-preserving data mining:
Privacy-Preserving Data Publishing: These techniques tend to study different transformation methods associated with privacy. These techniques include methods such as randomization, k-anonymity, and l-diversity. Another related issue is how the perturbed data can be used in conjunction with classical data mining methods such as association rule mining.
Changing the results of data mining applications to preserve privacy: In many cases, the results of data mining applications such as association rule or classification rule mining can themselves compromise the privacy of the data. This has spawned a field of privacy research in which the results of data mining algorithms such as association rule mining are modified in order to preserve the privacy of the data.
Query Auditing: Such methods are akin to the previous case of modifying the results of data mining algorithms. Here, we are either modifying or restricting the results of queries.
Cryptographic Methods for Distributed Privacy: In many cases, the data may be distributed across multiple sites, and the owners of the data across these different sites may wish to compute a common function. In such cases, a variety of cryptographic protocols may be used in order to communicate among the different sites, so that secure function computation is possible without revealing sensitive information.
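Of the techniques above, k-anonymity is the most straightforward to illustrate. A minimal check, assuming the quasi-identifiers have already been generalized (for example, an exact age collapsed to an age band and a full ZIP code to a prefix); the column names are illustrative:

```python
from collections import Counter

def is_k_anonymous(rows: list, quasi_identifiers: list, k: int) -> bool:
    """A dataset is k-anonymous when every combination of quasi-identifier
    values appears in at least k records, so no individual stands out."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())
```

If the check fails for the desired k, the usual remedy is further generalization or suppression of the rarest quasi-identifier combinations before publishing.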
Privacy Engineering with AI
Privacy by Design is a policy concept that was introduced at the Data Commissioner's Conference in Jerusalem, where over 120 different countries agreed they should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, but how you operationalize, how you run your business, and how you organize around your business and data.
Privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, “What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?”
It’s not just about individual machines making correlations; it’s about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with personally identifiable information. For AI, it is just sort of the next layer of that. We’ve gone from individual machines, networks, to now we have something that is looking for patterns at an unprecedented capability, that at the end of the day, it still goes back to what is coming from what the individual has given consent to? What is being handed off by those machines? What are those data streams?
Also, there is the question of context. The simplistic policy of asking users whether an application can access different pieces of their data is very reductive. It gives no real measure of how that data will be leveraged, or of what other information about the user the application could deduce and mine from it. Privacy is extremely sensitive and depends not only on what data is collected but also on the purpose: have you given consent to having it used for that particular purpose? So AI could play a role in making sense of whether data is processed in line with such consent.
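A sketch of what purpose-aware consent checking might look like in code; the registry shape, purpose names, and user identifier are illustrative assumptions, not any platform's actual API.

```python
# Maps a user to the data types they have shared, and for each data type,
# the specific purposes they consented to.
CONSENT_REGISTRY = {
    "user-42": {
        "location": {"navigation"},   # consented for navigation only
        "contacts": set(),            # shared, but no processing purpose granted
    },
}

def may_process(user: str, data_type: str, purpose: str) -> bool:
    """Allow processing only if the user consented to this data type
    *for this specific purpose*, not merely to sharing the data at all."""
    return purpose in CONSENT_REGISTRY.get(user, {}).get(data_type, set())
```

The point of the sketch is the distinction it encodes: access to a data type is not consent to every use of it, which is exactly the contextual gap described above.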
The Final Word: Breach of Privacy as Crucial as Breach of Data
We are slowly giving equal, if not greater, importance to breaches of privacy as compared to breaches of data. Scrutiny will eventually extend even to policies that, though legally acceptable or passively mandated, result in compromised privacy and lost trust. There is no point claiming one's policies are legally safe if the end result leaves users at the receiving end.
This would require a comprehensive analysis of data streams, not only internal to an application ecosystem like Facebook but across the extended ecosystem of all the players it channels data sharing to, albeit in a policy-protected manner. It will take AI-enabled contextual decision-making to determine which policies could, in certain circumstances, eventually breach privacy.
Longer-term, though, you have got to build that ombudsman. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.
AI & FINTECH – TWO GAME CHANGING REVOLUTIONS IN THE DIGITAL ERA
More investors are setting their sights on the financial technology (Fintech) arena. According to consulting firm Accenture, investment in Fintech firms rose by 10 percent worldwide to the tune of $23.2 billion in 2016.
China is leading the charge after securing $10 billion in investments across 55 deals, accounting for 90 percent of investments in Asia-Pacific. The US came second, taking in $6.2 billion in funding. Europe also saw an 11 percent increase in deals, despite Britain seeing a decrease in funding due to uncertainty from the Brexit vote.
The excitement stems from the disruption of traditional financial institutions (FIs) such as banks, insurance, and credit companies by technology. The next unicorn might be among the hundreds of tech startups that are giving Fintech a go.
What exactly is going to be the next big thing has yet to be determined, but artificial intelligence (AI) will play a huge part.
Stiffening competition
The growing reality is that, while opportunities abound, competition is also heating up.
Take, for example, the number of Fintech startups that aim to digitize routine financial tasks like payments. In the US, the digital wallet and payments segment is fiercely competitive. Pioneers like PayPal see themselves being taken on by other tech giants like Google and Apple, by niche-oriented ventures like Venmo, and even by traditional FIs.
Most recently, the California-based robo-advisor Wealthfront has added artificial intelligence capabilities to track account activity on its own product and on integrated services such as Venmo, analyzing how account holders spend, invest, and make financial decisions in order to provide more customized advice to its customers. Sentient Technologies, which has offices in both California and Hong Kong, is using artificial intelligence to continually analyze data and improve investment strategies; the company has several other AI initiatives in addition to its own equity fund. AI is even being used for banking customer service: RBS has developed Luvo, a technology that assists its service agents in finding answers to customer queries. It can search through a database, but it also has a human personality and is built to learn and improve continually over time.
Some ventures are seeing bluer oceans by focusing on local and regional markets where conditions are somewhat favorable.
The growth of China’s Fintech was largely made possible by the relative age of its current banking system. It was easier for people to use mobile and web-based financial services such as Alibaba’s Ant Financial and Tencent since phones were more pervasive and more convenient to access than traditional financial instruments.
In Europe, the new Payment Services Directive (PSD2), set to take effect in 2018, has busted the game wide open. Banks are obligated to open up their application program interfaces (APIs), enabling Fintech apps and services to tap into users' bank accounts. The line between banks and fintech companies is set to blur, so just about everyone in finance will compete with old and new players alike.
Leveraging Digital
Convenience has become such a fundamental selling point that a number of Fintech ventures have zeroed in on delivering better user experiences for an assortment of financial tasks such as payments, budgeting, banking, and even loan applications.
There is a mad scramble among companies to leverage cutting-edge technologies for competitive advantage. Even established tech companies like e-commerce giant Amazon had to give due attention to mobile as users shift their computing habits towards phones and tablets. Enterprises are also working on transitioning to cloud computing for infrastructure.
But where do more advanced technologies such as AI come in?
The drive to eliminate human fallibility has also pushed artificial intelligence (AI) to the forefront of research and development. Its applications range from sorting what gets shown on your social media newsfeed to self-driving cars. It is also expected to have a major impact on Fintech, given the potential for game-changing insights derived from the sheer volume of data humanity is generating. Enterprising ventures are banking on it to expose gaps in a market that competition has made increasingly narrow.
All about algorithms
AI and finance are no strangers to each other. Traditional banking and finance have relied heavily on algorithms for automation and analysis. However, these were exclusive only to large and established institutions. Fintech is being aimed at empowering smaller organizations and consumers, and AI is expected to make its benefits accessible to a wider audience.
AI has a wide variety of consumer-level applications for smarter and more error-free user experiences. Personal finance applications now use AI to balance people's budgets based specifically on a user's behavior. AI also powers robo-advisors that guide casual traders in managing their stock portfolios.
For enterprises, AI is expected to continue serving functions such as business intelligence and predictive analytics. Merchant services such as payments and fraud detection are also relying on AI to seek out patterns in customer behavior in order to weed out bad transactions.
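As a toy illustration of that pattern-seeking, here is a z-score outlier check against a customer's spending history. Real fraud systems use far richer features and learned models; the threshold here is an illustrative assumption.

```python
import statistics

def flag_suspicious(history: list, new_amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the
    customer's historical spending pattern."""
    if len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean  # any deviation from a constant pattern stands out
    return abs(new_amount - mean) / stdev > z_threshold
```

In practice such a screen is only a first filter: flagged transactions feed into downstream models or human review, since a single statistical rule produces both false positives and misses.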
Thanks to these services, people may soon have very little excuse for not having a handle on their money.
Concerns Going Forward
While artificial intelligence holds the promise of efficiency, better decision-making, stronger compliance, and potentially even more profits for investors, the technology is young. Banks need to find ways to lower costs, and technology is the most obvious answer. A logical response is to automate as much decision-making as possible, hence the number of banks enthusiastically embracing AI and automation. But the unknown risks inherent in aspects of AI have not been eliminated. According to a Euromoney survey and report commissioned by Baker & McKenzie, of 424 financial professionals, 76% believe that financial regulators are not up to speed on AI, and 47% are not confident that their own organizations understand the risks of using AI. Additionally, an increasing reliance on artificial intelligence comes with a reduction in jobs. Many argue that human intuition plays a valuable role in risk assessment and that the black-box nature of AI makes it difficult to understand certain unexpected outcomes or decisions produced by the technology.
Towards the future
With the stiff competition in Fintech, ventures have to deliver truly valuable products and services in order to stand out. The venture that provides the best user experience often wins, but finding this X factor has become increasingly challenging.
The developments in AI may provide that something extra, especially if it can promise to take the guesswork and human error out of finance. It's for these reasons that AI might just hold the key to further Fintech innovations.