Mobility, cloud and Big Data all promise to help enterprises increase efficiency and productivity, improve decision-making and lower costs. The laudable goal is to make business more competitive, but for IT, legal and compliance teams, these new technologies often lead to increased complexity, loss of control and even increased costs as massive amounts of data now move to an ever-increasing number of endpoints, including mobile devices and third-party hosting services. These challenges can be overcome with a new approach to standardising information metadata.
If IT doesn’t fully understand what data exists and where various types of information are located, then it can’t ensure that the right people have the right access at the right time, and it certainly can’t adequately secure the data against breaches and theft, or delete private information as required by new privacy laws. E-discovery costs can skyrocket as the amount of data that needs to be collected increases. Even business users can suffer as the information they need for daily activities and the data they want to use for Big Data analytics become harder to find and control, leading to lower productivity and redundant effort, while undercutting the hoped-for improvements in decision-making.
To maintain control over their burgeoning data stores, organisations need to develop insight across all data, no matter who creates it, where it lives, and with whom it’s shared. Unfortunately, most companies see this as a hugely expensive and disruptive challenge. In fact, there is a simple and cost-effective way to achieve it, provided the organisation is willing to roll it out incrementally over time – which is still far better than not doing it at all.
The strategy is based on applying the same metadata standardisation typically used on structured databases to all other data across the enterprise, on-premises and in the cloud, including all message types – email, text and SMS messaging, social media and so on – documents (word processing, spreadsheets, presentations, etc.), and even log files. In some regulated industries, such as financial services, metadata standardisation could also be applied to voice communications data, such as recorded conversations and voicemail files.
Let’s say there is a master ‘worker’ ID database covering employees and onboarded external personnel. Using this ID to tag every document, message and database record with who created, revised or deleted it would make it possible to relate data back to particular people at every stage of a business process, whether the data ends up in cloud storage or travels from mobile device to mobile device. This one step alone could make e-discovery processes more efficient and facilitate data protection and privacy efforts. It would also make it possible to identify the complete “data footprint” of every individual across all data sources – applications, shared services, on-premises and cloud.
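As a minimal sketch of the idea, the snippet below tags each record with a standardised worker ID and then assembles one person’s data footprint across sources. The field names and ID format are assumptions for illustration, not a prescribed schema.

```python
# Sketch only: records tagged with a standardised worker ID (hypothetical
# "W-nnnn" format), queried for one individual's complete data footprint.
from dataclasses import dataclass, field

@dataclass
class Record:
    content: str
    source: str                              # e.g. "email", "cloud-storage", "crm-db"
    created_by: str                          # standardised worker ID
    revised_by: list = field(default_factory=list)  # worker IDs of revisers

def data_footprint(records, worker_id):
    """Return every record a given worker created or revised, across all sources."""
    return [r for r in records
            if r.created_by == worker_id or worker_id in r.revised_by]

records = [
    Record("Q3 forecast", "cloud-storage", created_by="W-1001"),
    Record("Client memo", "email", created_by="W-2002", revised_by=["W-1001"]),
    Record("Order row", "crm-db", created_by="W-3003"),
]

footprint = data_footprint(records, "W-1001")
print([r.content for r in footprint])   # prints ['Q3 forecast', 'Client memo']
```

Because the same ID tags every data type in every location, the same one-line query serves e-discovery collection, privacy deletion requests and access reviews alike.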
Let’s look at another important use case. The migration of data beyond the firewall has exacerbated what was already a major challenge for CIOs: distinguishing valuable information from the approximately 75 percent of enterprise data in any organisation that is useless debris. If the goal is to manage data regardless of where it lives, to retire data centres and to move data efficiently to the cloud, then it is vital to identify what data exists, what is important and what lacks any value. Applying standardised metadata to all enterprise data can dramatically improve the identification of data with business, legal, records, compliance or security value, and begin to shine a light on the firm’s dark data.
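The triage this enables can be sketched in a few lines: items carrying at least one recognised value tag are kept, while untagged items become candidates for review and disposal. The tag names here are illustrative assumptions, not a standard vocabulary.

```python
# Illustrative sketch: separating data with recognised business, legal,
# records, compliance or security value from untagged "dark data".
# Tag names below are assumptions for the example.
VALUE_TAGS = {"business_value", "legal_hold", "record_class", "compliance_ref"}

def triage(items):
    """Split items into (keep, review) based on presence of any value tag."""
    keep, review = [], []
    for item in items:
        if VALUE_TAGS & set(item.get("tags", {})):
            keep.append(item)
        else:
            review.append(item)          # disposal candidate: no recognised value
    return keep, review

docs = [
    {"name": "contract.pdf", "tags": {"legal_hold": True}},
    {"name": "old_draft.tmp", "tags": {}},
    {"name": "ledger.xlsx",  "tags": {"record_class": "finance"}},
]
keep, review = triage(docs)
print([d["name"] for d in review])   # prints ['old_draft.tmp']
```

The point is not the code but the precondition: this triage only works once the value tags have been applied consistently across every repository, on-premises and cloud alike.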
The number of tags needed to dramatically improve data management and support initiatives such as e-discovery, regulatory compliance, data debris disposal, and cybersecurity and threat response is surprisingly small. Employee ID, client ID and product ID might be a great place to start. The key is to establish enough tags to be useful, but not so many that it becomes burdensome to apply them to all types of data in all locations the firm can influence or control.
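One way to keep the vocabulary small and consistently applied is to enforce it at tagging time, rejecting anything outside the agreed set. The sketch below assumes the three starter tags named above; the function and field names are hypothetical.

```python
# Hedged sketch: a small, fixed tag vocabulary enforced at tagging time,
# so the same tags are applied to any data object wherever it lives.
ALLOWED_TAGS = {"employee_id", "client_id", "product_id"}

def tag(metadata, **tags):
    """Attach standardised tags to an object's metadata.

    Raises ValueError for any tag outside the agreed vocabulary, which is
    what keeps the tag set from sprawling until it becomes burdensome.
    """
    unknown = set(tags) - ALLOWED_TAGS
    if unknown:
        raise ValueError(f"Non-standard tags: {sorted(unknown)}")
    metadata.update(tags)
    return metadata

# Works identically for a cloud file, an email or a database row:
meta = tag({"path": "s3://bucket/report.xlsx"},
           employee_id="W-1001", client_id="C-77", product_id="P-5")
print(sorted(meta))   # prints ['client_id', 'employee_id', 'path', 'product_id']
```

Rejecting unknown tags up front is a design choice: it trades a little flexibility for the uniformity that makes cross-repository queries possible at all.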
Also, standardisation should be applied over time, evolving with systems and user behaviour rather than disrupting them. One strategy is to follow the natural life cycle of IT: each time an application, platform or server is changed, require that its embedded metadata be standardised as part of the work. Eventually the use of standardised metadata should become habitual, systematic and pervasive.
With a disciplined approach to metadata standardisation, companies can be prepared to more effectively take advantage of new mobility, cloud and Big Data opportunities.