Making the Metaverse Safe For Everyone
Unlike any other time in history, the past decade has shown us the power of technology to transform our working and personal lives. Technology-enabled shopping, banking and working from any location made the restrictions from COVID-19 more manageable. We are also getting a hint of the power that big data, AI and machine learning will bring to our everyday lives through personalized virtual experiences, smart utilities and AI-driven health care.
This recent lifestyle transformation was driven, in part, by personal data, and it is this information that will power the Metaverse. We know the privacy and integrity issues of having our personal data used in virtual environments: data breaches, identity theft, fraud and misappropriated information. In these situations, however, especially when it comes to social media, we as users likely consented to our information being used, stored or published at some time and in some way.
This is where the Metaverse differs. The Metaverse’s potential is endless. Along with the question of “Who will be in the Metaverse?” consumers and businesses also should be asking, “How much of my identity do I want the Metaverse to have?”
If the Metaverse starts with the right protection and permission models in place, we will have an easier and better onboarding experience and can keep our individual identities self-driven. This is because the Metaverse will only be as strong as the privacy and security models that it embodies and enforces.
It Starts With Consent and Ends With Trust
For consumers, the current model for data sharing is deceptively simple. You opt in, and your data will be used or stored. To give that consent, a consumer has to have some level of trust that the company, application or site will not use the information for reasons beyond what has been consented to. Most consumers are savvy to scams and know that extra security protections—such as two-factor authentication—help make their information safer. In some ways, the online experience has evolved so that security and consent are interchangeable. If a consumer is concerned about a business’s practices, they can pull their account information and trust that the company, application or site has not retained any unauthorized copies. The assumption—and in most cases, the practice—is that once a customer leaves, their data is no longer used. But what happens when the data ecosystem becomes more complex, as with the Metaverse?
Although we do not yet know what procedures will be in place, we can safely say that the current consent model will not work in the Metaverse. Asking for consent for each function—or at each stage of the Metaverse’s evolution—will make the experience clunky and complicated (assuming there will be consent levels based on data complexity) and will lead to consent fatigue. And what happens to a person’s data when they want to opt out or update it?
To use an analogy, think about how a consumer’s current online profile and experience differ vastly from what they were 10 years ago. Yes, technology has become more complicated, but along the way, people also made decisions about what their online personae would reflect. In the Metaverse, a consumer risks an avatar that can display the equivalent of their 2010 tweets, their online prom photos and their top MySpace posts.
Why Current Practices Will Not Work in the Metaverse
Different governments have different data protection statutes. One example is personal data from Europe—which is under GDPR protection—being housed in the United States. At issue is the fact that data in the U.S. is transformed into unprotected cleartext when in use, which violates GDPR policy. (Similar GDPR-style data protection laws are active in several states, such as California, Colorado and Virginia.) In the current environment, unprotected cleartext is relied on during processing because the protected version of data cannot be processed or introduces errors. If we apply the current system to the Metaverse, the outcome will be the same. What is not working in our physical world now will not work in a virtual world and will certainly fail in the Metaverse. It could lead to reputational harm, substantial fines and the loss of data privacy and security.
The Technology-Driven Solution
There are solutions to be found within newer technologies. First, there is blockchain, which offers a decentralized model of use and security. What makes it difficult for the Metaverse is the fact that blockchain stores data on-chain, making it hard to manipulate and/or remove data from the blockchain. This limitation also means that blockchain can run into compliance issues and fail to meet privacy requirements. Another approach involves non-fungible tokens (NFTs), which have a level of security built in. However, NFTs support single-purpose use, are not amenable to data sharing and combining—even when authorized—and may be misappropriated by bad actors.
The solution for consent and protection can be found in GDPR pseudonymization, which means processing personal data in such a way that it cannot be tied to the data subject without additional information that is held separately and securely. This provides a way to improve consent functions while strengthening privacy and security. A more accurate term is “statutory pseudonymization,” given that the GDPR statute has very specific requirements for pseudonymization that exceed those generally associated with the practice outside of Europe.
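As a rough illustration of the idea (the function names, token format and in-memory table here are hypothetical, not any vendor’s implementation), pseudonymization replaces direct identifiers with tokens while the re-identification mapping is held separately and securely:

```python
import secrets

# Illustrative sketch only: the lookup table mapping tokens back to
# identities would, in practice, be stored apart from the data under
# separate, stricter access controls. Without it, the pseudonymized
# record cannot be tied back to the data subject.
lookup_table = {}

def pseudonymize(record, identifier_fields):
    """Return a copy of the record with direct identifiers replaced by random tokens."""
    safe = dict(record)
    for field in identifier_fields:
        token = secrets.token_hex(8)
        lookup_table[token] = safe[field]  # held separately and securely
        safe[field] = token
    return safe

record = {"name": "Alice Example", "email": "alice@example.com", "age_band": "30-39"}
safe = pseudonymize(record, ["name", "email"])

# The pseudonymized record remains usable for processing and analytics
# (e.g., the age_band field), but reveals no direct identity on its own.
```

The point of the sketch is the separation: the data that travels and is processed is useful but unlinkable, and the additional information needed to re-identify it never travels with it.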
How Would it Work?
Statutory pseudonymization embeds controls that travel with the data in a way that keeps it usable but enables a strict layer of privacy in on-premises, hybrid cloud and global multi-cloud environments. This approach is designed to enable distributed data sharing, combining, analytics, artificial intelligence or machine learning. The key to statutory pseudonymization is that it goes beyond a one-environment solution and works across data silos.
The process also complements advanced techniques like trusted execution environments (TEEs), secure enclaves within cloud computing ecosystems. Software embodying statutory pseudonymization extends the protection that encryption provides for data at rest and data in transit to cover data in use. The result is that there is no reliance on unprotected plaintext data.
Statutory Pseudonymization Will Power Data Intermediaries
A data intermediary is a means by which people can authorize consent, determine incentives and/or receive third-party authentication so that data travels between humans and technology in a trusted manner. A data intermediary, driven by the controls in statutory pseudonymization, can provide a model for safe data use in the Metaverse. To put it in simple terms, a data intermediary would be a means of driving a digital ID, by which people can give detailed and real-time consent as to their data use and participation. In the Metaverse, the digital ID would be in use during each interaction and would provide a “code of permissions” for recurring use. The World Economic Forum recently published an in-depth look at digital agency and IDs.
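To make the “code of permissions” idea concrete, here is a minimal sketch (class and purpose names are invented for illustration) of a digital ID that is checked at each interaction, with consent granted or revoked in real time and anything not explicitly permitted refused by default:

```python
# Hypothetical sketch of a digital ID carrying a "code of permissions."
# Each Metaverse interaction consults the person's current consent
# before any data use proceeds.

class DigitalID:
    def __init__(self, owner, permissions):
        self.owner = owner
        self.permissions = dict(permissions)  # purpose -> consented?

    def consent_to(self, purpose):
        self.permissions[purpose] = True

    def revoke(self, purpose):
        self.permissions[purpose] = False

    def allows(self, purpose):
        # Default deny: any purpose not explicitly consented to is refused.
        return self.permissions.get(purpose, False)

def interact(digital_id, purpose):
    """Gate an interaction on the digital ID's current permissions."""
    if not digital_id.allows(purpose):
        return f"refused: no consent for {purpose}"
    return f"proceeding with {purpose}"

me = DigitalID("avatar-1234", {"avatar_rendering": True})
interact(me, "avatar_rendering")   # proceeds: consent is on record
interact(me, "behavioral_ads")     # refused: never consented to
me.consent_to("behavioral_ads")    # consent can be granted in real time...
me.revoke("avatar_rendering")      # ...and withdrawn just as quickly
```

The design choice worth noting is default deny: the burden is on the interaction to show consent exists, not on the person to enumerate everything they forbid.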
When we look at the model above—the formation of a digital ID driven by a technologically protected model of consent and security—we have a good starting point for entering the Metaverse. We should all consider what our entry into the Metaverse will look like personally, including our shared identity, the data that can be used and the types of interactions we will allow. Similarly, the Metaverse should provide ways to slow down or stop experiences in real-time based on personal commands. Statutory pseudonymization can be the backbone that lets people participate in the Metaverse in a way that is safe and self-directed.
This article originally appeared in Security Boulevard. All trademarks are the property of their respective owners. All rights reserved by the respective owners.