
DataOps in the real world

By Douglas McDowell, General Manager, Database, SolarWinds

Data can be a company’s most valued asset—it can even be more valuable than the company itself. But if the data is inaccurate or constantly delayed because of delivery problems, a business can’t use it to make well-informed decisions.

Having a solid understanding of a company’s data assets isn’t easy. Environments are changing and becoming increasingly complex. Tracking the origin of a dataset, analysing its dependencies, and keeping documentation up to date are all resource-intensive responsibilities.

This is where DataOps comes in. DataOps—not to be confused with its cousin, DevOps—began as a series of best practices for data analytics. Over time, it evolved into a fully formed practice all on its own. Here’s its promise: DataOps helps accelerate the data lifecycle, from the development of data-centric applications up to delivering accurate business-critical information to end users and customers.

DataOps came about because of inefficiencies within the data estate at most companies. Various IT silos weren't communicating effectively (if they communicated at all). Tooling built for one team, which used the data for a specific task, often kept other teams from gaining visibility. Data source integration was haphazard, manual, and often problematic. The sad result: the information delivered to end users fell short of expectations or was outright inaccurate.

While DataOps offers a solution, those in the C-suite may worry it could be high on promises and low on value. It can seem risky to upset processes already in place. Do the benefits outweigh the inconvenience of defining, implementing, and adopting new processes? In the organisational debates I have on the topic, I often cite the Rule of Ten: it costs ten times as much to complete a job when the data is flawed as when it is good. By that measure, DataOps is vital and well worth the effort.

You May Already Use DataOps—But Not Know It

In broad terms, DataOps improves communication among data stakeholders. It rids companies of their burgeoning data silos. DataOps isn't something new. Many agile companies already practice DataOps constructs, but they may not use the term or be aware of it.

DataOps can be transformative, but like any great framework, achieving success requires a few ground rules. Here are the top three real-world must-haves for effective DataOps.

  1. Commit to Observability

Observability is fundamental to the entire DataOps process. It gives companies a bird’s-eye view across their continuous integration and continuous delivery (CI/CD) pipelines. Without observability, your company can’t safely automate or employ continuous delivery.

In a skilled DevOps environment, observability systems provide that holistic view—and that view must be accessible across departments and incorporated into those CI/CD workflows. When you commit to observability, you position it to the left of your data pipeline—monitoring and tuning your systems of communication before data enters production. You should begin this process when designing your database and observe your nonproduction systems, along with the different consumers of that data. In doing this, you can see how well apps interact with your data—before the database moves into production.
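
For illustration, here is a minimal Python sketch of the kind of pre-production instrumentation described above: timing the queries an application issues against a nonproduction database so slow interactions surface before anything reaches production. The sqlite3 database, table, and queries are hypothetical stand-ins rather than a prescribed toolset.

    # Minimal sketch: time queries against a nonproduction database so slow
    # interactions are visible before the schema reaches production.
    # sqlite3 (standard library) stands in for any database; the table and
    # queries are hypothetical examples.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                     [("acme", 120.0), ("globex", 75.5)])

    def timed_query(sql, params=()):
        """Run a query and record how long it took -- the raw material of observability."""
        start = time.perf_counter()
        rows = conn.execute(sql, params).fetchall()
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{elapsed_ms:8.2f} ms  {sql}")
        return rows

    # Exercise the queries an application would issue, while still in nonproduction.
    timed_query("SELECT customer, SUM(total) FROM orders GROUP BY customer")
    timed_query("SELECT * FROM orders WHERE total > ?", (100,))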

Monitoring tools can help you stay better informed and perform more diagnostics. In turn, your troubleshooting will improve, helping you fix errors before they grow into larger problems. Monitoring gives data pros context. But remember to abide by the “Hippocratic Oath” of Monitoring: First, do no harm.

If your monitoring creates so much overhead that your performance is reduced, you’ve crossed a line. Ensure your overhead is low, especially when adding observability. When data monitoring is viewed as the foundation of observability, data pros can ensure operations proceed as expected.
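
One practical way to keep that overhead low is sampling, so only a small fraction of calls pay the cost of measurement. The sketch below assumes a 1% sample rate chosen purely for illustration.

    # Minimal sketch of low-overhead monitoring via sampling: only a small
    # fraction of calls are timed. The 1% rate is an illustrative assumption.
    import random
    import time

    SAMPLE_RATE = 0.01  # measure roughly 1 in 100 calls

    def maybe_timed(fn, *args, **kwargs):
        """Call fn, timing it only when this call falls into the sample."""
        if random.random() < SAMPLE_RATE:
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"sampled call to {fn.__name__}: {elapsed_ms:.2f} ms")
            return result
        return fn(*args, **kwargs)

    def fetch_report():
        time.sleep(0.002)  # stand-in for real data access work
        return "report"

    for _ in range(1000):
        maybe_timed(fetch_report)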

  2. Map Your Data Estate

You must know your schemas and your data.

First, document your overall data estate to understand changes and their impact. As database schemas change, you need to gauge their effects on applications and other databases. This impact analysis is only possible if you know where your data comes from and where it’s going.
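
As a rough illustration of that impact analysis, the sketch below snapshots a schema, applies a change, and diffs the two snapshots to flag columns whose dependent applications and reports should be reviewed. The sqlite3 database and table are hypothetical.

    # Minimal sketch of schema impact analysis: snapshot the schema, change it,
    # then diff the snapshots so dependent apps and reports can be checked.
    # sqlite3 stands in for any database; the table is a hypothetical example.
    import sqlite3

    def snapshot_schema(conn):
        """Return {table: [(column, declared type), ...]} for every user table."""
        schema = {}
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        for table in tables:
            cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
            schema[table] = [(c[1], c[2]) for c in cols]
        return schema

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
    before = snapshot_schema(conn)

    conn.execute("ALTER TABLE customers ADD COLUMN phone TEXT")
    after = snapshot_schema(conn)

    for table in after:
        added = set(after[table]) - set(before.get(table, []))
        if added:
            print(f"{table}: new columns {sorted(added)} -- review dependent apps and reports")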

Beyond database schema and code changes, you must control data privacy and compliance with a full view of data lineage. Tag the location and type of data, especially personally identifiable information (PII)—know where all your data lives and everywhere it goes. Where is sensitive information stored? What other apps and reports does that data flow across? Who can access it across each of those systems?
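
One simple way to hold those answers is a lineage catalogue that tags PII columns and the downstream systems consuming them. The sketch below is a minimal illustration; every dataset and system name in it is hypothetical.

    # Minimal sketch of a PII/lineage catalogue: record where sensitive columns
    # live and which downstream systems consume them, then answer the question
    # "where does this PII flow?". All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Dataset:
        name: str
        pii_columns: list                              # columns classified as PII
        consumers: list = field(default_factory=list)  # downstream apps and reports

    catalogue = {
        "crm.customers": Dataset("crm.customers", ["email", "phone"],
                                 ["billing-app", "marketing-report"]),
        "billing.invoices": Dataset("billing.invoices", [], ["finance-dashboard"]),
    }

    def pii_exposure(catalogue):
        """Print every place a PII column can flow to."""
        for ds in catalogue.values():
            for col in ds.pii_columns:
                for consumer in ds.consumers:
                    print(f"{ds.name}.{col} -> {consumer}")

    pii_exposure(catalogue)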

  3. Automate Data Testing

The widespread adoption of DevOps has brought about a common culture of unit testing for code and applications. Often overlooked is testing of the data itself: its quality and how it works (or doesn't) with code and applications. Effective data testing requires automation. It also requires constant testing with your newest data. New data isn't tried and true; it's volatile.

To ensure you have the most stable system possible, test using the most volatile data you have. Break things early. Otherwise, you'll push inefficient routines and processes into production, and you'll get a nasty surprise when it comes to costs.

The product you use to test that data, whether it's a third-party tool or scripts you write yourself, needs to be solid, and it needs to be part of your automated test and build process. As the data moves through the CI/CD pipeline, you should perform quality, access, and performance tests. In short, you want to understand what you have before you use it.
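
As a minimal illustration of what such automated checks might look like inside a build pipeline, the sketch below runs basic quality and volume tests against a hypothetical batch of new data and reports any failures.

    # Minimal sketch of automated data tests suitable for a CI/CD step:
    # quality checks (missing values, duplicates) plus a simple volume check.
    # The batch below is hypothetical sample data.
    new_batch = [
        {"order_id": 1, "customer": "acme",   "total": 120.0},
        {"order_id": 2, "customer": "globex", "total": 75.5},
        {"order_id": 3, "customer": None,     "total": 42.0},
    ]

    def test_no_missing_customers(rows):
        missing = [r["order_id"] for r in rows if not r["customer"]]
        assert not missing, f"orders with no customer: {missing}"

    def test_unique_order_ids(rows):
        ids = [r["order_id"] for r in rows]
        assert len(ids) == len(set(ids)), "duplicate order ids"

    def test_minimum_volume(rows, expected=1):
        assert len(rows) >= expected, "suspiciously small batch"

    for test in (test_no_missing_customers, test_unique_order_ids, test_minimum_volume):
        try:
            test(new_batch)
            print(f"PASS {test.__name__}")
        except AssertionError as exc:
            print(f"FAIL {test.__name__}: {exc}")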

DataOps is vital to becoming a data business. It’s the ground floor of data transformation. These three must-haves will allow you to know what you already have and what you need to reach the next level.

