Artificial Intelligence Apps: Associated Privacy Risks and How to Address Them
A few years ago, many people were quick to take part in the viral social media trend of uploading photos of themselves to FaceApp, an Artificial Intelligence (AI) app that applied a filter to age users’ faces. What appeared to be a harmless social media trend, however, posed a “potential counterintelligence threat,” according to the Federal Bureau of Investigation (FBI), because the app was developed in Russia. While FaceApp denies any ties to the Russian government, the app still presents privacy concerns because the company “holds rights over its users’ app-generated photos.” As AI apps such as FaceApp become more commonplace, it is important that users understand the risks these types of apps pose to their personal information and privacy.
Artificial Intelligence, or AI for short, has many different definitions. The most simplistic is a set of technologies that enables machines to perform functions resembling human reasoning and decision-making.
AI is increasingly being incorporated into apps marketed to individuals and corporate entities for a variety of uses, such as healthcare, travel and navigation, and social media content. Over the years, AI has evolved from its most basic form of returning search results on a user’s preferred search engine to more advanced functions, such as a cellular device notifying a user of the suggested route and estimated time of arrival (ETA) to a frequently visited destination. AI is also being tested in a variety of sectors, including the medical field. Apps such as Binah.ai claim to help you understand your vitals, with the promise of “health checks anywhere” using just your camera.

AI use in these forms requires vast amounts of data collection to ensure an AI algorithm’s success and functionality. To effectively reach the goal of “resembling human reasoning,” AI apps must collect data from individuals to train and enhance their algorithms. Many apps’ terms and conditions lay out the company’s policy on what data is collected and what will be done with it. Apps such as TikTok (as stated in its privacy policy) collect troves of user data, such as keystrokes, biometric data including faceprints and voiceprints, and video watch time to enhance the recommendation algorithm.

As AI continues to become more prevalent in our lives, it is important to understand the privacy risks associated with these systems. Some of these concerns include:
- Surveillance – Many AI-powered systems rely on vast amounts of personally identifiable information (PII) to train and enhance their algorithms. Uploaded user images, like those submitted to FaceApp, can potentially be used as training data for facial recognition algorithms. China, for example, has a history of using these algorithms to keep tabs on its citizens, and such surveillance methods could be mimicked or expanded upon globally.
- Hacking – Hackers may target companies that use AI, as these companies are likely to store large amounts of customer data. A breach of these databases could expose customers’ PII. For example, in March 2022, a healthcare company that relies on AI to deliver patient care guidelines disclosed a breach that affected roughly 1.1 million patients.
- Data Spillover – This issue involves the collection of data from people who are not the intended subjects of the collection. For example, Amazon Alexa devices are known to continue recording for a few seconds after a task has been completed to help improve device function, which means that parts of people’s private conversations may be stored and potentially heard by Amazon employees.
Knowing the risks that AI presents, here are some best practices that individuals and companies can adopt to protect their privacy:
- Read privacy policies and terms of service: Taking the time to understand where your data is going and what it is being used for can save you headaches in the long term. A quick CTRL + F search for keywords such as “data” and “access” can help you identify the relevant sections of the terms of service to focus on (see the short sketch after this list for one way to speed this up). Understanding an app’s data collection and use policies can help you make an informed decision about whether you want to download and use it.
- Limit the amount of PII that you provide: Apps will typically prompt you to enter PII such as your name, date of birth, contact information, or a photo. Pay attention to which items are required to be entered vs. which ones are optional and provide only the necessary information. Consider providing false information if legally permitted and if it will not affect the functionality of the app, so that if a data breach occurs your true PII is not exposed.
- Research the app before downloading and using it: Check for media coverage or reviews of the app that discuss privacy concerns or data breaches for the app and its developer.
- Utilize multi-factor authentication (MFA) on all apps and accounts: Always utilize MFA when it is offered. This is a key step in securing accounts and preventing unauthorized access, should there be a breach.
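For readers comfortable with a little scripting, the privacy-policy review step above can be sped up with a minimal sketch like the one below. It assumes you have saved the policy locally as plain text; the file name `privacy_policy.txt` and the keyword list are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: flag privacy-relevant passages in a locally saved policy.
# Assumptions: the policy is saved as plain text; the keywords are examples only.
from pathlib import Path

KEYWORDS = ["data", "access", "third party", "share", "retain", "biometric"]

def flag_sections(path: str, keywords=KEYWORDS, context: int = 1) -> None:
    """Print each line containing a keyword, with a little surrounding context."""
    lines = Path(path).read_text(encoding="utf-8", errors="ignore").splitlines()
    for i, line in enumerate(lines):
        lowered = line.lower()
        if any(k in lowered for k in keywords):
            start, end = max(0, i - context), min(len(lines), i + context + 1)
            print(f"--- match near line {i + 1} ---")
            print("\n".join(lines[start:end]))
            print()

if __name__ == "__main__":
    flag_sections("privacy_policy.txt")  # hypothetical local copy of the policy
```

This is only a reading aid: it surfaces the sections worth a closer look, and the judgment about whether the data practices are acceptable still rests with the reader.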