TEGWAR, AI and the FTC – Gov’t Agency Warns of Deceptive AI Contract Language
February 22, 2024 | securityboulevard.com

In the 1973 baseball movie “Bang the Drum Slowly,” the players for the New York Mets—um, Mammoths—spend their off time playing a card game with fans called “TEGWAR,” which stands for “The Exciting Game Without Any Rules.” The Federal Trade Commission (FTC) recently warned against a new form of TEGWAR: contract language snuck into terms of service, terms of use or privacy policies that purports to notify consumers or data subjects of a company’s AI data collection and use practices through secret changes to those contracts. This is particularly true when the changes to a privacy policy purport to be retroactive—allowing the company to use data previously collected for one purpose for an entirely different purpose, without effective notice and consent to the newly proposed use.

The Nature of Online Contracts

In the digital age, agreements govern our every online interaction. We click “Agree” to website terms, sign up for apps and purchase services, often without a second thought. These online contracts, however, rarely resemble traditional agreements. They are often contracts of adhesion, presented as “take it or leave it” propositions and drafted by powerful companies, leaving users little room for negotiation.

One defining characteristic of online contracts is their inaccessibility. Buried within lengthy legalese and presented in click-through formats, they are rarely read by users. A 2019 study found that less than 2% of users actually read website terms and conditions. This lack of engagement creates a dangerous power imbalance, leaving users unaware of limitations on their rights, data usage and potential liabilities. Put simply, people don’t read terms and conditions, end user license agreements or privacy policies. To the extent that data collection policies and practices depend on consumer assent to those practices, the clickwrap notice is how most companies obtain that consent.

In the EU and many other countries, however, merely clicking “I agree” on a website is not sufficient to render a data practice legal. The data collection practices must be reasonable and the information collected must be necessary for those practices and only retained for a limited period of time.

Further complicating matters is the dynamic nature of online contracts. Many platforms update their terms regularly, often with little user notification. These changes can significantly alter the agreement, introducing new restrictions or data collection practices. Notably, clicking “Continue” or using the service after updates are made is often interpreted as consent, despite a user’s lack of actual awareness. As a result, even for that small percentage of people who read online contracts and privacy policies, their terms may change over time, and data collected under one privacy regime may thereafter be used for purposes for which the consumer never gave effective consent.

This issue of implicit consent came to light in the FTC’s 2016 settlement with Snapchat. The company had changed its privacy policy to allow location sharing without explicit user consent. The FTC challenged this “dark pattern” tactic, ultimately resulting in a $5 million fine to Snapchat and stricter user notification requirements. Similarly, in 2019, the FTC settled with YouTube for collecting children’s data without parental consent, violating the Children’s Online Privacy Protection Act (COPPA). In 2023, the FTC settled a case with a genetic testing company, 1Health, related to its retroactive changes to privacy policies, which it used to share data with third parties without the consumers’ effective consent. These cases highlight the FTC’s ongoing efforts to address what it considers to be unfair and deceptive practices in online contracts related to privacy.

Retroactive “Consent”

While U.S. courts haven’t definitively addressed the issue, the FTC has actively challenged unfair and deceptive practices in online contracts. To constitute “consent” to a data collection or use practice, there must be conspicuous notice of the proposed term or condition and some action by the party being bound that unequivocally demonstrates assent to the new terms and conditions. Moreover, when terms and conditions are proposed to be changed, it must, at a minimum, be clear that the new terms and conditions apply to data collected both under the old terms and the new ones. Finally, a term in an online contract that simply says, “Continued use of this website constitutes consent to the changes,” may not be sufficient when the only way to see the new terms and conditions is to “use” the website.

The “best practice” for obtaining effective consent from consumers to a new use of their data is to provide clear, concise and understandable notice, paired with clear affirmative consent, e.g., “I understand the changes you have made and I agree to them.” However, if you condition continued use of a service on agreeing to a change in the privacy policy with respect to data already collected, you are telling your customers/clients that the only way they can do business with you is by agreeing not only to their data being used for AI in the future, but also to data you have already collected being used for the new purpose.
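
As a rough illustration of what that best practice might look like in practice, below is a hypothetical sketch, in Python, of an affirmative-consent record for a policy change. The ConsentRecord structure, its field names and the record_consent helper are illustrative assumptions, not a schema required by the FTC or any statute; the point is simply that the record captures the specific policy version shown, a plain-language summary of what changed, whether the consent reaches previously collected data and an explicit affirmative action rather than mere continued use.

```python
# Hypothetical sketch of an affirmative-consent record for a privacy policy change.
# The structure and field names are illustrative assumptions, not a mandated schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    policy_version: str          # the specific terms the user was shown
    summary_of_changes: str      # clear, concise notice of what is new
    covers_existing_data: bool   # does the consent reach data already collected?
    affirmative_action: str      # the unambiguous act demonstrating assent
    recorded_at: str             # when the consent was captured (UTC)

def record_consent(user_id: str, policy_version: str,
                   summary: str, covers_existing_data: bool) -> ConsentRecord:
    # "Continued use of the site" is deliberately not an accepted action here;
    # a record is created only on an explicit, unambiguous click.
    return ConsentRecord(
        user_id=user_id,
        policy_version=policy_version,
        summary_of_changes=summary,
        covers_existing_data=covers_existing_data,
        affirmative_action="clicked 'I understand the changes and I agree to them'",
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

print(record_consent("u-123", "2024-02-privacy-v3",
                     "Chat transcripts may now be used to train AI models",
                     covers_existing_data=False))
```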

Ethical Quandaries and the Path Forward

Beyond legal complexities, retroactive consent raises significant ethical concerns. It undermines user autonomy and trust, potentially exposing individuals to unintended data uses based on a past, arguably uninformed, agreement. In the dynamic digital landscape, the concept of “past consent” becomes questionable, particularly with evolving laws and regulations governing data privacy.

Companies Respond to AI

Artificial intelligence relies on large data sets (training sets) to “teach” the AI program. If you want an AI program to be able to tell the difference between a “dog” and a “wolf,” you input thousands (or millions) of images of dogs and wolves and either let the AI figure out the difference on its own (undirected training) or tell it which ones are dogs and which are wolves (directed training) and let the program figure out the distinguishing features. Of course, the AI program may determine that the main difference is that wolves are usually photographed outside and in the snow—so a white background becomes the leading indicator that the animal is a wolf. Nobody said it was perfect. Generative AI programs—those that mimic human writing—do so by “reading” large volumes of written work, detecting patterns and copying them. ChatGPT, the best-known version, stands for “Chat Generative Pre-trained Transformer,” meaning that the data engine has already been “trained” on large data sets and can use that training to generate new information based on those data sets.
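
To make the dog-versus-wolf example concrete, here is a minimal sketch of “directed” (supervised) training, assuming invented numeric features in place of real images. The feature names, the synthetic data and the “white background” shortcut are all assumptions for illustration, not a description of any actual system; the sketch simply shows how a model given labeled examples can end up leaning on a spurious feature.

```python
# Minimal sketch of "directed" (supervised) training on labeled examples.
# Features, labels and the "white background" shortcut are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

labels = rng.integers(0, 2, n)                # 0 = dog, 1 = wolf (we supply the answers)
snout = rng.normal(labels * 0.5, 1.0)         # weak genuine signal
ears = rng.normal(labels * 0.5, 1.0)          # weak genuine signal
background = rng.normal(labels * 3.0, 1.0)    # strong spurious signal: wolves shot in snow

X = np.column_stack([snout, ears, background])
model = LogisticRegression().fit(X, labels)   # "directed" training: labels are provided

# The largest learned weight typically lands on the background feature,
# the "snow" shortcut described above.
print(dict(zip(["snout", "ears", "background"], model.coef_[0].round(2))))
```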

If an AI engine has “read” every mystery novel ever written, it “knows” how these plots generally unfold, the character archetypes, the cadence and tempo and language, and can then “generate” a passable mystery novel, replete with the wizened down-on-his-luck private investigator, the femme fatale and of course, the requisite ‘What a twist!’ ending. But to do so, it has to “read” a lot of novels. And that’s where privacy and legal issues start to creep in.

The Value of Data

Data—including but not limited to “personally identifiable information,” or PII—has long been valuable to Silicon Valley. Knowing and predicting consumer trends, purchasing preferences and needs are the principal drivers of technology and innovation. Data is why modern smart TVs cost only $500 and not $5,000—the TV is watching you as you watch it and is “selling” the data to others. Estimates of the value of personal data vary wildly because the “value” is not the same as the purchase price, nor is it the harm or injury resulting from its loss. The simple name of a user has limited value—fractions of a penny. But that same information, coupled with behavior patterns, education, history, contacts, etc., may have much greater value. This is why there is a huge market for this data. If I am selling a mid-sized SUV, it’s really helpful to have a list of people in my area who are in the market for such a vehicle, who have a need (real or perceived) for it and who have sufficient disposable income to buy it. Companies like Amazon, Google and Meta are literally built on the scaffold of personal data.

Other data is similarly important—particularly to AI programs. AI-generated pictures are based on the hundreds of billions of pictures people post online. An AI program “knows” what the Chrysler Building looks like because people have put pictures of the structure up online. It also knows what you look like (and who your friends are, etc.) because of the data you (or others) have posted. It knows how Mark Twain writes, and it knows that I sneak in a movie or song reference (usually from the 1970s or 1980s) into every article I write. AI programs are voracious consumers of data. That’s how they work.

Limits on Data Collection, Use and Dissemination

There is increasing litigation around the collection, dissemination and use of data for AI purposes. Entertainers have sued AI providers, alleging that the use of their posted copyrighted works for AI training constitutes a prohibited “derivative work” under copyright law. A recent case involved a Home Depot customer who sued the hardware giant for using their voice on phone calls to train an AI-powered voice response program. Companies need to revisit their data collection and use policies—but also their data collection and use programs—in light of the power of AI. Even if the company is not itself using or contemplating the use of AI, its vendors, suppliers and third parties may be mining the company’s data for AI training purposes. For example, third-party security vendors and managed services providers are using large data sets of access logs and data from routers, hubs and other sources to profile potential hackers based on their “unusual activity,” often analyzing this data by reference to data from other sources. While this may be an appropriate use of log data, it also involves collecting massive amounts of data about employees’ and users’ “ordinary” or “usual” activity and effectively sharing that data with others. Data collection and use policies need to be reexamined because of AI.
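
To illustrate why that kind of sharing matters, here is a minimal sketch of baseline-versus-anomaly profiling on access logs. The log format, the user names and the three-sigma threshold are invented for this example; the point is that flagging “unusual” activity first requires collecting and modeling everyone’s “usual” activity.

```python
# Minimal sketch of baseline-vs-anomaly profiling on access logs.
# Log format, users and thresholds are invented for illustration.
from statistics import mean, pstdev

# Step 1: build a per-user baseline of ordinary behavior from historical logins
# (hour of day, 0-23). This is the "usual activity" data described above.
history = {"alice": [9, 10, 9, 10, 11], "bob": [14, 15, 13, 14, 15]}

# Step 2: score new login events against each user's own baseline.
new_events = [("alice", 10), ("bob", 3)]
for user, hour in new_events:
    hours = history[user]
    mu, sigma = mean(hours), pstdev(hours) or 1.0
    flagged = abs(hour - mu) > 3 * sigma      # crude three-sigma rule
    print(user, hour, "unusual" if flagged else "ordinary")
```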

Contract and Deception

The FTC advisory, issued February 13, 2024, noted that “It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.” For now, just like consent to arbitration agreements, companies need to make sure there is clear and conspicuous notice that data is being used for AI training, explain what that means for data privacy and obtain actual affirmative consent for such use. Maybe ChatGPT can write such a notice for you.

Image source: Jose Francisco Morales via Unsplash: https://unsplash.com/jose-francisco-morales-hKzmPs8Axh8-unsplash-e1678663324447.jpg

Source: https://securityboulevard.com/2024/02/tegwar-ai-and-the-ftc-govt-agency-warns-of-deceptive-ai-contract-language/