Intel has released model legislation designed to inform policymakers and spur discussion on personal data privacy. Prompted by the rapid rise of new technologies like artificial intelligence (AI), Intel’s model bill is open for review and comment from privacy experts and the public on an interactive website.
The bill’s language and comments received should provide useful insight for those interested in meaningful data privacy legislation.
Intel’s model data privacy bill aims to bring together policymakers and others in a transparent and open process that helps drive the development of actual data privacy legislation. Intel has launched a website where interested parties can review and comment on the model bill. Company leaders believe input will help to promote the development of constructive data privacy legislation in Congress.
Data are the lifeblood of many critical new industries, including precision medicine, automated driving, workplace safety and smart cities. But the growing amount of personal data collected, sometimes without consumers’ awareness, raises serious privacy concerns.
People need assurances that information they share, both knowingly and unknowingly, will be used in beneficial, responsible ways, and that they will be appropriately protected. The U.S. needs a comprehensive federal law to create a framework in which companies can demonstrate responsible behavior.
Privacy is an important and ongoing issue in our data-centric world. In a white paper published last month, Intel’s Global Privacy team laid out six policy principles for safety and privacy in the age of AI, one of the technical domains that has significant privacy implications.
These principles were among the factors that shaped Intel’s draft legislation. Among them: new legislative and regulatory initiatives should be comprehensive, technology neutral and supportive of the free flow of data. Organizations should embrace risk-based accountability, putting in place technical and organizational measures to minimize privacy risks in AI. And automated decision-making should be fostered while being augmented with safeguards that protect individuals.
Governments should promote access to data by supporting the creation of reliable datasets available to all, fostering incentives for data sharing and promoting cultural diversity in datasets. Funding security research is essential to protecting privacy, and algorithms can help detect unintended discrimination and bias, identity theft and cyber threats.
The consumer privacy and data security program should be designed to consider and protect an individual’s privacy throughout the information life cycle; to facilitate individuals’ control over their personal data and enable them to participate in decisions about how that data is processed; and to ensure the confidentiality, integrity, availability and security of personal data. It should also protect against unauthorized access, acquisition, disclosure, destruction, alteration or use of personal data, and against reasonably anticipated threats and vulnerabilities to the security of personal data or to the legitimate privacy interests of individuals.
The program should also be designed to identify, assess and mitigate privacy risk on an ongoing basis; to prevent the use of personal data in any manner inconsistent with the original purpose for which it was collected, unless subsequently permitted; and to prevent outputs from machine learning, algorithms, predictive analytics or similar analysis from being used, in violation of any state or federal law or regulation, to wrongly discriminate against individuals, facilitate such discrimination, or deny any individual the exercise of a constitutionally protected right or privilege.