Artificial Intelligence (AI) has the potential to transform entire industries and, ultimately, every aspect of our lives.

The technology has a wide range of possible applications across industries and domains. New applications and projects are made public daily, and the use of the technology seems to have only one boundary: the limits of human creativity.

One thing is for sure: AI workloads will be among the most critical workloads we use in healthcare, finance, lifestyle and many other areas. This raises the question for organizations of how these important AI applications can be kept up and running without downtime – and how the underlying data can be kept secure without inhibiting its mobility.

Keeping AI data and workloads always on

To guard against data loss and outages, many organizations rely on tried-and-tested backups. That makes sense for data protection on a broad scale. However, backups are unsuitable for business continuity and disaster recovery (DR), especially for the most critical data and workloads – like critical AI workloads.

The weakness of backups is that they protect individual servers, not complete applications. After data is restored from a backup, applications must first be manually reassembled from their individual components. This takes time and is why restoration can last days or even weeks.

To guarantee the constant availability of critical AI applications, companies need more modern solutions that deliver fast recoverability. More and more companies are therefore turning to DR solutions for faster recovery of their most critical data and workloads.

Currently, Continuous Data Protection (CDP) is the most effective recovery technology. With CDP, every data change is recorded in a journal as it is written. CDP thus makes it possible to rewind, within seconds and without significant data loss, to the state that existed just moments before an attack or other disruption.
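To make the journaling idea concrete, here is a minimal sketch in Python of an append-only CDP journal with a point-in-time rewind. All names are hypothetical, and the in-memory structures stand in for real journal storage – this illustrates the concept, not any vendor's actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    """One captured write: which block changed, to what, and when."""
    timestamp: float
    block_id: int
    data: bytes

@dataclass
class CdpJournal:
    """Append-only journal of block writes, as CDP records them."""
    entries: list = field(default_factory=list)

    def record_write(self, block_id: int, data: bytes) -> None:
        # Every write is journaled the moment it happens --
        # no snapshot schedule is involved.
        # perf_counter: a high-resolution monotonic clock for ordering
        # entries; a real journal would also store wall-clock time.
        self.entries.append(JournalEntry(time.perf_counter(), block_id, data))

    def state_at(self, point_in_time: float) -> dict:
        # Replay the journal up to the chosen instant to reconstruct
        # the volume as it was seconds before a disruption.
        state = {}
        for entry in self.entries:
            if entry.timestamp > point_in_time:
                break
            state[entry.block_id] = entry.data
        return state

# Rewind to just before a (simulated) ransomware write.
journal = CdpJournal()
journal.record_write(7, b"good data")
checkpoint = time.perf_counter()
journal.record_write(7, b"encrypted garbage")
clean = journal.state_at(checkpoint)   # {7: b"good data"}
```

Because every write lands in the journal the instant it happens, the achievable recovery point is bounded only by how far back the journal reaches, not by a snapshot schedule.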

Critical AI applications need the lowest possible RPOs and RTOs

To achieve the lowest possible recovery point objectives (RPOs) and recovery time objectives (RTOs) for these critical AI applications, near-synchronous replication offers the best of both worlds: the high performance of synchronous replication without its high network and infrastructure requirements.

Near-synchronous replication is technically asynchronous. As with synchronous replication, data is written to multiple locations at nearly the same time, but a small delay is allowed between the primary and secondary locations.

Near-synchronous replication is always on, continuously replicating only the changed data to the recovery site within seconds. Because it is always on, it does not need to be scheduled and does not use snapshots; writes complete against the source storage without waiting for acknowledgment from the target storage.
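The sketch below, again in Python with hypothetical names, shows the essential difference on the write path: the application commits to source storage and returns immediately, while a background sender ships each changed block to the recovery site.

```python
import queue
import threading

class NearSyncReplicator:
    """Sketch of near-synchronous replication: the write path commits
    to source storage and returns at once; a background sender ships
    each change to the recovery site within seconds."""

    def __init__(self, source: dict, target: dict):
        self.source = source                # stand-in for source storage
        self.target = target                # stand-in for the recovery site
        self.changes = queue.Queue()
        threading.Thread(target=self._sender, daemon=True).start()

    def write(self, block_id: int, data: bytes) -> None:
        self.source[block_id] = data        # commit locally first
        self.changes.put((block_id, data))  # hand off only the changed block
        # No wait for the target's acknowledgment: this small, bounded
        # lag is what separates near-synchronous from synchronous.

    def _sender(self) -> None:
        while True:
            block_id, data = self.changes.get()
            self.target[block_id] = data    # in reality: sent over the WAN

source, target = {}, {}
replicator = NearSyncReplicator(source, target)
replicator.write(7, b"model weights")       # returns at local-storage speed
```

In a fully synchronous scheme, write() would block until the target confirmed the copy. Here the application sees only local-storage latency, and the recovery site trails by at most the sender's small backlog – the "near" in near-synchronous.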

One of the main advantages of near-synchronous replication is that it provides a high level of data availability and protection while still allowing faster write speeds than synchronous replication. This makes it a good choice for workloads like critical AI applications with high write loads or large amounts of data.

Data mobility for AI data creates huge challenges for IT infrastructure

AI runs on data. It ushers in nothing less than a new era of data creation, at a scale and volume far beyond anything IT has seen before. Even simple applications will use exabytes of raw data that must be prepared for model training and subsequent inference.

The data sets are often created at the edge and need to be transferred to a central data repository for processing. And at the end of the AI data lifecycle, the data needs to be archived for potential retraining.

All of this creates completely new challenges for IT infrastructure and management, as these huge amounts of data need to be moved continuously.

Lifting and shifting these huge data sets will not be possible with current network technology and data management solutions based on synchronous replication.

To move AI data despite limited processing power and bandwidth, asynchronous replication will have to be used. It replicates continuously at the block level, keeping bandwidth demands low and avoiding high peaks in data transfer.
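A minimal Python sketch of this idea, with hypothetical names: track which blocks have changed, coalesce rewrites, and ship a small, capped batch per cycle so the transfer rate stays flat rather than spiking.

```python
import time

class AsyncBlockReplicator:
    """Sketch of block-level asynchronous replication: track dirty
    blocks and ship them in small, steady batches instead of bulk
    'lift and shift' transfers that spike the network."""

    def __init__(self, batch_size: int = 64, interval_s: float = 1.0):
        self.dirty = {}                  # changed blocks since the last cycle
        self.batch_size = batch_size     # cap per cycle to smooth bandwidth
        self.interval_s = interval_s

    def on_write(self, block_id: int, data: bytes) -> None:
        # Rewrites of the same block coalesce: only the latest
        # version ever travels, keeping bandwidth needs low.
        self.dirty[block_id] = data

    def replicate_cycle(self, send) -> None:
        # Ship at most batch_size blocks, so the transfer rate
        # stays flat instead of peaking.
        for block_id in list(self.dirty)[: self.batch_size]:
            send(block_id, self.dirty.pop(block_id))

    def run_forever(self, send) -> None:
        while True:
            self.replicate_cycle(send)
            time.sleep(self.interval_s)
```

Because rewrites of the same block coalesce before they are sent, only the latest version of each block crosses the wire – which is precisely what keeps the bandwidth footprint low.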

Conclusion: The future of AI needs CDP and near-synchronous replication

AI can create a house music track in seconds and paint a picture of you in the style of van Gogh. But many current and future AI applications also have the potential to improve humanity at large.

AI will very soon be able to help us diagnose diseases, detect cancer cells, drive vehicles autonomously, solve traffic congestion, translate multiple languages, optimize energy consumption, detect crop disease, or monitor the climate as well as air and water quality – amongst many other amazing use cases.

These applications will be critical for humans, and they need to be protected with the best available solutions – like Continuous Data Protection. At the same time, the scale of AI creates huge challenges for current IT infrastructure to store, manage and move the enormous amounts of data.

Current technologies will not be able to provide the needed data mobility for the massive scale of AI data sets. To be able to manage AI data successfully, new data-mobility solutions will need to be adopted.