The world is not short on ocean data—it’s short on usable knowledge.
Across governments, industries, and research institutions, billions are spent every year collecting ocean data. Monitoring programs expand, environmental assessments multiply, and scientific studies continue to grow in scale and sophistication.
And yet, the same frustration persists.
Decision-makers—whether in government, offshore energy, fisheries, or conservation—still struggle to access timely, reliable answers. The ocean is changing quickly, but the systems meant to understand it are not keeping pace.
This disconnect is becoming more consequential. Climate instability, biodiversity loss, pollution, and the rapid expansion of ocean industries are converging in the same spaces. At the same time, the push to grow the blue economy is accelerating, often without the clarity needed to ensure that growth is truly sustainable.
So the issue is no longer about collecting more data.
It’s about why we are not getting more value from what we already spend.
What this study actually does
This work steps back from individual datasets and asks a more fundamental question: how does ocean knowledge actually move from observation to decision?
Drawing on decades of experience, an extensive literature review, and thousands of conversations with scientists, policymakers, industry leaders, and rightsholders, it maps the full lifecycle of ocean data. From the moment observations are collected, through processing and analysis, to publication and eventual use in decision-making, a consistent pattern emerges.
Data are not just collected in isolation. They are processed, analyzed, and published in isolation too.
Each project builds its own workflows. Teams independently clean, structure, and analyze their data. Reports and papers are produced to answer specific questions, then remain fixed in time—even as new data continue to be generated.
What results is not a cumulative system of knowledge, but a series of disconnected efforts.
In response, the study introduces a different model: a continuous knowledge-flow system, where data collection, analysis, and publication are integrated, and where outputs evolve as new information becomes available. This model is implemented through eOceans.
What the study reveals
The most striking finding is not a lack of data, but a profound inefficiency in how they are handled.
Globally, it is estimated that up to $291 billion is lost each year—not because data are missing, but because they are underused, duplicated, or locked into systems that prevent them from being fully leveraged.
Much of this loss occurs in places that are rarely discussed. Data often sit in silos, inaccessible or difficult to interpret. Even when made “open,” they can remain effectively unusable due to issues with standardization, discoverability, and trust.
But the largest inefficiency lies in processing.
Across the world, scientists, analysts, consultants, and technical teams are repeatedly performing the same complex tasks—cleaning data, standardizing formats, integrating datasets, running analyses, and producing outputs. These steps are essential, but they are also highly repetitive. In many cases, they consume the majority of a project’s time and resources.
And yet, they are rarely designed to be reused.
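To make this concrete, here is a minimal sketch in Python (using pandas) of the kind of cleaning-and-standardization step that gets rebuilt project after project. The column names, schema, and checks are hypothetical illustrations, not drawn from eOceans or any particular monitoring program; the point is simply that a step like this can be written once and reused.

```python
# A minimal sketch of a reusable cleaning/standardization step.
# Column names, schema, and checks are hypothetical, for illustration only.
import pandas as pd

def standardize_observations(raw: pd.DataFrame) -> pd.DataFrame:
    """Map one survey's raw sightings table onto a shared schema."""
    df = raw.rename(columns={"Date": "date", "Sp": "species", "N": "count"})
    df["date"] = pd.to_datetime(df["date"], errors="coerce")   # unparseable dates -> NaT
    df["count"] = pd.to_numeric(df["count"], errors="coerce")  # non-numeric counts -> NaN
    df["species"] = df["species"].str.strip().str.lower()      # normalize species names
    df = df.dropna(subset=["date", "count"])                   # drop incomplete rows
    return df[df["count"] >= 0].reset_index(drop=True)         # basic sanity check
```

Written once and shared, a function like this can be applied to every incoming survey. Rebuilt independently by every team, the same logic is paid for again and again.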
At the same time, the outputs of this work—papers, reports, datasets—remain static. They reflect what was known at a particular moment, even as new data are already being collected. Updating them is difficult, often discouraged by publication systems, and rarely prioritized.
The result is a system where knowledge struggles to keep up with reality.
Why this matters for real-world decisions
This inefficiency is not abstract—it directly affects how decisions are made.
For governments and regulators, it slows approvals and weakens confidence in environmental assessments. For offshore energy and infrastructure, it introduces uncertainty, extends timelines, and increases costs. For fisheries and conservation, it limits the ability to detect changes early and adapt management strategies in time.
For investors, it obscures risk and complicates efforts to assess environmental and social performance. And for communities—including Indigenous and coastal groups—it often means that their knowledge is either excluded or processed externally, at significant cost and with limited control.
Across all of these contexts, the same issue emerges: decisions are being made without fully leveraging the data that already exist.
The gap no one talks about
The dominant narrative suggests that better decisions require more data. But this study points to a different constraint.
The real limitation is not volume—it is flow.
Today’s systems are built around projects, not continuity. They are siloed rather than integrated, manual rather than automated, and static rather than adaptive. Data do not accumulate into a shared, evolving understanding. Instead, they are repeatedly reprocessed, repackaged, and locked into outputs that cannot change.
This has a human cost as well. Highly trained experts spend much of their time on complex but repetitive tasks, rather than on interpretation, synthesis, and decision support—where their expertise has the greatest value.
What changes now
A different model is now within reach—one designed for a world where both data and change are continuous.
In a knowledge-flow system, data are structured at the moment they are collected. Processing and analysis are not repeated from scratch, but built to be reusable and scalable. Datasets can be integrated across projects and sectors, and outputs are no longer fixed—they update as new data are added.
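As an illustration only, here is a minimal sketch of that loop, reusing the hypothetical standardize_observations helper from the earlier sketch. The "published" summary is recomputed from all data to date whenever a new batch arrives, rather than being frozen at the moment a report was written.

```python
# A minimal sketch of a knowledge-flow loop, building on the hypothetical
# standardize_observations helper above. The output is recomputed from all
# data to date each time new observations arrive, so it is never frozen.
import pandas as pd

store = None  # the shared, growing record of standardized observations

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """The 'output': a summary derived from everything observed so far."""
    return df.groupby("species", as_index=False)["count"].sum()

def ingest(new_batch: pd.DataFrame) -> pd.DataFrame:
    """Standardize a new batch, fold it into the record, refresh the summary."""
    global store
    batch = standardize_observations(new_batch)
    store = batch if store is None else pd.concat([store, batch], ignore_index=True)
    return summarize(store)
```

Each call to ingest plays the role of a new field season or a new contributor's upload; the summary decision-makers see is always the latest one, with no re-analysis from scratch.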
This transforms knowledge from something static into something living.
Instead of producing reports that are outdated on arrival, we move toward systems where insights remain current, transparent, and accessible. Contributors can see how their data are used. Decision-makers can act on up-to-date information. And collaboration becomes easier, because everyone is working from a shared, evolving foundation.
Platforms like eOceans were built to enable this shift—connecting observation, analysis, and dissemination into a single, continuous system.
The impact is not just efficiency. It is the ability to make better decisions, faster, with greater confidence and accountability.
Frequently asked questions
Do we need more investment in ocean data? In many cases, no. This work shows that substantial value can be unlocked by improving how existing data are processed, shared, and reused.
Why is so much value being lost? Because data are siloed, processing is duplicated across projects, and outputs are static, preventing reuse and continuous updating.
Why is data processing such a bottleneck? It is complex, time-consuming, and repeated across teams. In many projects, it consumes the majority of time and resources.
Why are scientific publications often outdated? They are designed as fixed outputs, even though new data may already exist or continue to be collected after publication.
How does this affect environmental decisions? It slows timelines, increases uncertainty, and can lead to decisions based on incomplete or outdated information.
What is a knowledge-flow system? It is a system where data continuously move from collection to analysis to application, with outputs that evolve over time rather than remaining static.
Final thought
We are not lacking data. We are working within systems that prevent those data from working together. Fix that, and we don't just recover lost value. We create a fundamentally new way of understanding and managing the ocean, one that moves at the speed the world now demands.
Read the full study
Published in IEEE: Unlocking $291 Billion for the Ocean: eOceans Integrates Research with Dissemination to Maximize Impact (Ward-Paige, 2024)