From Chaos to Control: Five Guiding Principles for Building an Intelligent DataOps Culture
Douglas McDowell, Chief Strategy Officer at SentryOne, looks at how Intelligent DataOps can help businesses control, and make full use of, their data.
Data is arguably among the most critical assets for any modern business, and without doubt is crucial to every application. In maximising the value of data and pursuing a data-driven business culture, ‘DataOps’ is emerging as a way of empowering organisations to control data chaos and guide decision making.
Often miscategorised as “DevOps for data”, DataOps and DevOps share some common ground – particularly the collaboration required to improve process and outcomes. But while DevOps addresses the wider software development and operations lifecycle, a well-functioning DataOps culture empowers organisations to take control of their data estate, monetise it, and guide effective decision making at every level.
Taking the discipline a step further, intelligent DataOps – building the people, processes, and technology for a data-driven culture – is not just central to this process but key to helping improve the quality of life for data professionals.
Building a DataOps practice can, therefore, help organisations ensure they not only take control of their data, but optimise its use to vastly increase its role, impact and value. There are a number of guiding principles that can help organisations ensure they build an effective and sustainable approach.
Five steps to intelligent DataOps
Optimised observability
This is a process that starts with designing data application performance in from the start, so it is optimised across the entire lifecycle. To achieve this, development teams need to monitor and tune database applications during development and testing, before they are released to production. This requires more than one-directional oversight of the data pipeline – it depends on feeding the intelligence gained from monitoring back into performance tuning and best practices (bi-directional integration).
What’s more, as data teams mature, they can amplify the value of intelligent DataOps through an informal observability ‘contract’: applying analytics to monitoring data by default.
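The idea of applying analytics to monitoring data by default can be sketched with a simple example. This is a minimal illustration only – the metric, window size, and threshold are assumptions, not any particular monitoring product’s API – showing how raw latency samples become an actionable signal:

```python
import statistics

def flag_anomalies(samples, window=20, threshold=3.0):
    """Flag samples more than `threshold` standard deviations above
    the mean of the preceding window - a basic way to turn raw
    monitoring data into signals a data team can act on."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and samples[i] > mean + threshold * stdev:
            flagged.append(i)
    return flagged

# Hypothetical query latencies in milliseconds, with a sudden spike.
latencies = [12, 11, 13, 12, 14, 12, 11, 13, 12, 14,
             13, 12, 11, 13, 12, 14, 12, 13, 11, 12,
             95]
print(flag_anomalies(latencies))  # → [20] (the spike is flagged)
```

In a mature practice, a check like this would run continuously against the monitoring feed rather than as a one-off script.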
Effective process communication
Intelligent DataOps practices are observable as well: they are intuitive, standardised and transparent, but ensuring the quality and consistency of communication throughout the organisational process requires effort and commitment. Technology resources, in the form of collaboration software, reporting and analytics tools, for example, can also be applied to create observable processes that encourage engagement amongst teams.
Data testing
Every application is data-centric, but data also happens to be the most volatile component in any app development process. As a result, an application can never be considered truly tested until it has been exposed to the wildest possible datasets. Automated, integrated data testing addresses this common gap in data pipelines and provides a form of data monitoring. This is vital for data science projects because, ultimately, it is not possible to build and train a useful model on bad data: any data science project built on untested data is, in effect, useless.
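Automated data testing can be as simple as running declarative checks over each batch before it enters the pipeline. The sketch below is illustrative – the table shape, column names, and rules are assumptions; a production suite would more likely use a framework such as Great Expectations or dbt tests:

```python
def validate_orders(rows):
    """Return a list of (row_index, problem) violations;
    an empty list means the batch passes the data tests."""
    violations = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Uniqueness and presence check on the key column.
        if row.get("order_id") is None:
            violations.append((i, "missing order_id"))
        elif row["order_id"] in seen_ids:
            violations.append((i, "duplicate order_id"))
        else:
            seen_ids.add(row["order_id"])
        # Type and range check on a numeric column.
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            violations.append((i, "invalid amount"))
    return violations

batch = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": 1, "amount": 5.00},   # duplicate key
    {"order_id": 2, "amount": -3.50},  # negative amount
]
print(validate_orders(batch))  # → [(1, 'duplicate order_id'), (2, 'invalid amount')]
```

Wiring checks like these into the pipeline, so a failing batch blocks downstream model training, is what closes the gap described above.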
Data estate mapping
In a fully optimised DataOps environment, data underpins all key business decisions, with organisations bound by law to meet data privacy regulations. Ideally, therefore, all data is accounted for and has a home, which in turn needs a reliable map of where the data lives, where it originated, and where it ends up. Automated database documentation and data lineage analysis help data teams tick these boxes.
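The “reliable map” of where data lives, originated, and ends up can be sketched as a lineage record kept alongside each transformation. The dataset and step names below are illustrative assumptions; real estates would use a dedicated lineage or catalogue tool:

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    dataset: str         # the dataset produced
    source: str          # the dataset it was derived from
    transformation: str  # what was done to it

@dataclass
class LineageMap:
    records: list = field(default_factory=list)

    def record(self, dataset, source, transformation):
        self.records.append(LineageRecord(dataset, source, transformation))

    def trace(self, dataset):
        """Walk back from a dataset to its origin."""
        path = [dataset]
        lookup = {r.dataset: r for r in self.records}
        while path[-1] in lookup:
            path.append(lookup[path[-1]].source)
        return path

lineage = LineageMap()
lineage.record("sales_clean", "sales_raw", "deduplicate + normalise currency")
lineage.record("sales_report", "sales_clean", "aggregate by region")
print(lineage.trace("sales_report"))  # → ['sales_report', 'sales_clean', 'sales_raw']
```

Being able to trace any report back to its raw source in this way is what makes privacy and audit questions answerable.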
Relational data is easier to manage
Unstructured and NoSQL databases have risen in popularity but are not the best fit for all data. Relational database management systems (RDBMS) provide the structure required for continuous integration/continuous delivery (CI/CD) that is central to DevOps and DataOps. Continuous monitoring of RDBMS, with observability across the data environment, improves data delivery to stakeholders, end users, and customers.
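One reason RDBMS suit CI/CD is that their schemas can be checked mechanically on every deployment. The sketch below uses SQLite purely for illustration – the table, columns, and gate logic are assumptions – to show a schema check that could run as a CI step before a release proceeds:

```python
import sqlite3

# What the application expects the deployed schema to look like
# (hypothetical table and columns for illustration).
EXPECTED_COLUMNS = {"customers": {"id", "name", "email"}}

def check_schema(conn):
    """Compare deployed columns against expectations;
    return a list of (table, missing, unexpected) failures."""
    failures = []
    for table, expected in EXPECTED_COLUMNS.items():
        # PRAGMA table_info rows are (cid, name, type, ...); index 1 is the name.
        cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
        if cols != expected:
            failures.append((table, expected - cols, cols - expected))
    return failures

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
print(check_schema(conn))  # → [('customers', {'email'}, set())] - email is missing
```

A failing check like this would block the pipeline, which is exactly the kind of continuous gate that unstructured stores make harder to build.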
These requirements exist because data is now a primary business currency. But transforming legacy approaches and processes to build a data-driven culture requires an honest assessment of the existing state of the data estate. Key questions to ask include: can users get to the data they need? Is that data trustworthy? And is it delivered in time to support an effective DataOps culture?
As organisations and their teams adopt DataOps and then progress to intelligent DataOps, they are likely to benefit from closer alignment between their data and DevOps teams. This leads to a ‘new normal’ where the chaos that so often characterises and diminishes the role of data in today’s data-obsessed organisations is brought under control. By focusing on the people, processes, and technology surrounding any data estate, it becomes practical to build an intelligent DataOps ecosystem. A focus on intelligent DataOps brings data value to the forefront of business decision-making, forms the foundation of a data-driven culture, and promotes collaboration between data and dev teams.