Long-Term Voyage Decision Making for Crewless Platforms

The desire to operate crewless platforms autonomously for months requires platforms that can sense their current state, maintain themselves, and perform long-term mission planning tasks to optimize their effectiveness as they degrade. How these long-term tasks are currently organized, executed, and regulated on human-crewed vessels is unexplored compared to more immediate navigation and hazard avoidance. This paper presents a series of inter-related explorations of the issues of longer-term mission planning in a fully autonomous framework. Based on the current state of the art, a new three-component ranking scale for crewless platforms is proposed. Semi-structured interviews with retired military crews and commercial mariners were used to identify which planning tasks crews currently carry out. At the end of the interview process, themes from all interviews were reviewed and an affinity diagram was created from them. The interviews revealed a surprising diversity of approaches, especially for tasks beyond machinery health assessment. Complementing this bottom-up analysis, a top-down analysis via a modified STAMP/STPA framework identifies critical information paths and control structures surrounding these tasks. By integrating the results of these two complementary analyses, gaps in our current ability to achieve long-term autonomous operations are identified. Three demonstration cases are proposed to help develop approaches that fill these gaps.

Data-driven models for vessel motion prediction and the benefits of physics-based information

Machine learning approaches, onboard measurements, and widely available wave forecast and hindcast data present an opportunity to develop predictive models for vessel motion forecasting. Detailed vessel motion forecasts would support underway and deployment decisions for safer and more efficient vessel operation. To demonstrate this application, ridge regression and neural network models for heave, pitch, and roll prediction were trained and tested using time-and-place specific, multidirectional wave model parameters as input. Additionally, the performance benefits of providing these predictive models with computationally efficient, physics-based model predictions (PBMPs) of heave, pitch, and roll as additional inputs were examined. Data from approximately 13,500 30-minute windows, measured aboard an operational research vessel, were used to train and test the data-driven models. Data from over 2,500 additional 30-minute windows, measured aboard a sister vessel, were also used to test the versatility of the trained models. The results of this study showed effective reduction of motion amplitude mean-squared error (MSE) values on multiple test datasets relative to the PBMPs alone. The results also showed that inclusion of PBMPs as input to the data-driven models was typically beneficial in terms of MSE reduction, stressing the importance of retaining physics-based information in data-driven models.
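The benefit of feeding a physics-based model prediction (PBMP) into a data-driven model can be illustrated with a minimal single-feature ridge regression sketch. The data, stress model, and coefficients below are synthetic stand-ins, not the paper's vessel dataset or model architecture.

```python
# Minimal sketch: single-feature ridge regression for a motion statistic,
# comparing a wave-parameter-only model against one that uses a physics-based
# model prediction (PBMP) as its input feature. All data are synthetic.
import random

random.seed(0)

def ridge_fit(xs, ys, lam=1e-2):
    """Closed-form single-feature ridge (no intercept): w = sum(xy) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def mse(pred, ys):
    return sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)

# Synthetic "measured" heave amplitude: a toy physics-based prediction
# plus small measurement noise.
hs = [random.uniform(0.5, 4.0) for _ in range(200)]      # significant wave height (m)
pbmp = [0.3 * h + 0.05 * h ** 2 for h in hs]             # toy physics-based prediction
heave = [p + 0.1 * random.gauss(0, 0.05) for p in pbmp]  # "measurement"

train, test = slice(0, 150), slice(150, None)

# Model A: wave parameter only; Model B: PBMP as the input feature.
w_a = ridge_fit(hs[train], heave[train])
w_b = ridge_fit(pbmp[train], heave[train])

mse_a = mse([w_a * x for x in hs[test]], heave[test])
mse_b = mse([w_b * x for x in pbmp[test]], heave[test])
print(f"MSE wave-only: {mse_a:.5f}, MSE with PBMP: {mse_b:.5f}")
```

Because the toy "truth" is nonlinear in wave height, the linear wave-only model carries model-form error that the PBMP feature absorbs, mirroring the paper's finding that retaining physics-based information typically reduces MSE.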

A Generic Framework for Data Model Fusion

This report was prepared for the Office of Naval Research in fulfillment of Task 1 of contract N00014-20-C-1099, titled “Data-Model Fusion for Naval Platforms and Systems.” The objective of Task 1 is to develop a rigorous, generic framework for naval applications of Data-Model Fusion. Data-Model Fusion (DMF) is a concept developed at Martin Defense Group (legacy Navatek), in conjunction with the University of Michigan, to describe: 1. Data: information from sensors, expert knowledge, reports, inspections, surveys, or other sources regarding physical components, systems, platforms, or fleets within available operating conditions. 2. Model: digital representations (e.g., empirical equations, physics-based models, networks, ontological characterizations, etc.) of those components, systems, platforms, or fleets within simulated operating conditions. 3. Fusion: the integration of said data and models to bring both into agreement. Our data-model fusion approach uses data science techniques and machine learning methods to improve state estimates, update model parameters, identify operational areas or anomalies, and inform decision-making. Our integration approach utilizes and expands upon the state of the art in data science and Artificial Intelligence (AI)-based decision support to provide real-time, actionable diagnostic and/or prognostic information on the state of real-world physical platforms. When a digital model is operationally coupled via sensors to a specific real-world component, system, platform, or fleet, we refer to it as a digital twin. Use cases for twins include managing degrading systems, improving performance, updating design approaches, and optimal planning for a fleet of similar platforms. Digital twins are not a necessary component of data-model fusion, but they are frequently used as a basis for analysis and decision-making in our real-world system applications.
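The "fusion" step, integrating a model estimate and sensor data to improve a state estimate, can be sketched in its simplest scalar form as a variance-weighted (Kalman-style) update. The temperatures and variances below are purely illustrative.

```python
# Minimal sketch of the "fusion" step: combine a model prediction and a sensor
# measurement of the same state, weighting each by its variance (a scalar
# Kalman-style update). Numbers are illustrative, not from any real system.

def fuse(model_est, model_var, sensor_est, sensor_var):
    """Variance-weighted fusion of two independent estimates of one state."""
    gain = model_var / (model_var + sensor_var)
    fused_est = model_est + gain * (sensor_est - model_est)
    fused_var = (1.0 - gain) * model_var  # always below both input variances
    return fused_est, fused_var

# Example: a digital model predicts a bearing temperature of 71.0 °C (var 4.0),
# while an onboard sensor reads 74.0 °C (var 1.0).
est, var = fuse(71.0, 4.0, 74.0, 1.0)
print(f"fused estimate: {est:.2f} °C, variance: {var:.2f}")
```

The fused variance is lower than either input variance, which is the sense in which fusion "brings both into agreement" while improving the state estimate.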
This report is a compilation of six separate reports, with an overall focus on defining a fundamental framework for data-model fusion in the naval domain. We start in Chapter 1 with a literature survey of approaches, gaps, and opportunities in data-model fusion. Next, we define in Chapter 2 a unified theory of digital twins, followed in Chapter 3 by a delineation of digital twin types in the naval domain. In Chapter 4 we discuss techniques for data persistence that enable storage of the geometry models, measurements, and environmental data needed by twins. In Chapter 5 we shift our focus back to data-model fusion, providing a survey of tools and techniques that can be used to inform naval systems design and operation. We develop methods for understanding and managing the implications, risks, and opportunities of digital naval engineering with respect to the design and operation of autonomous naval platforms and systems. Chapter 6 was written to serve as a primer on AI-based decision support methodologies, tools, and techniques for practicing naval research engineers and scientists. Finally, we conclude this report with a discussion on technology transfer, capability gaps, and opportunities for further research.

Estimating extreme characteristics of stochastic non-linear systems

The ocean environment is random and, as a result, marine structures face extreme, non-linear load effects. Estimating the extreme responses of such structures often reduces to purely probabilistic approaches that involve a considerable amount of conjecture. A method is developed in this paper to conserve original system information without the need for high-fidelity Monte Carlo simulations. Here, an iterative linearization of a non-linear system is developed to estimate extremes for the non-linear system. The Matched Upcrossing Equivalent Linear System (MUELS) method finds linear systems with zero-upcrossing periods equivalent to that of the non-linear system under a specified forcing spectrum. These linear systems are used as input into the Design Loads Generator, where ensembles of extreme linear realizations at the exposure period of interest, and the input time series that lead to those extremes, are generated. These input time series are valid inputs into the non-linear system and create a set of non-linear realizations that approximate the set of non-linear extremes at the exposure period of interest. As an example, this paper introduces and applies this method to the Duffing oscillator at various levels of increasing cubic stiffness, forced by an ITTC wave spectrum, for exposure periods as long as 10^8 hours, representing O(10^10) zero-crossing maxima. Quantitative comparisons are made with Monte Carlo simulations, GEVD estimates, and expected values of time series near extreme local maxima.
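The core matching step, finding a linear system whose zero-upcrossing period equals a target value, can be sketched for a one-degree-of-freedom system. Here the target period is simply assumed (in the method it would come from the non-linear system), and the forcing spectrum and parameters are illustrative only.

```python
# Toy sketch of the core MUELS idea: find an equivalent linear stiffness whose
# response zero-upcrossing period Tz = 2*pi*sqrt(m0/m2) matches a target
# period. Spectrum and parameters are illustrative, not from the paper.
import math

M, C = 1.0, 0.2                              # mass, damping of the linear system
OMEGAS = [0.05 * i for i in range(1, 200)]   # frequency grid (rad/s)
DW = 0.05

def forcing_spectrum(w):
    # Simple illustrative band-limited forcing spectrum
    return 1.0 if 0.3 <= w <= 2.5 else 0.0

def upcrossing_period(k):
    """Zero-upcrossing period of the linear response under the forcing spectrum."""
    m0 = m2 = 0.0
    for w in OMEGAS:
        h2 = 1.0 / ((k - M * w * w) ** 2 + (C * w) ** 2)   # |H(w)|^2
        s = h2 * forcing_spectrum(w)
        m0 += s * DW
        m2 += s * w * w * DW
    return 2.0 * math.pi * math.sqrt(m0 / m2)

def match_stiffness(target_tz, lo=0.1, hi=10.0, tol=1e-6):
    """Bisection on stiffness: a stiffer system has a shorter upcrossing period."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if upcrossing_period(mid) > target_tz:
            lo = mid   # period too long -> need a stiffer system
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = 5.0           # assumed zero-upcrossing period of the non-linear system (s)
k_eq = match_stiffness(target)
print(f"equivalent stiffness: {k_eq:.4f}, Tz: {upcrossing_period(k_eq):.4f} s")
```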

Prediction of Human Injury Due to Impact

Predicting rare events of a non-linear process can be a difficult challenge. In this paper, the linearization of a non-linear process, namely impact loading, was completed by performing a Gaussianization of a sample non-linear, non-Gaussian time series to predict the probability of human injury in a specified sea state. The Gaussianized time series was then input into the Design Loads Generator (DLG) to estimate a lower bound on the non-Gaussian extrema and to provide an ensemble of extreme time series of the Gaussianized process. The DLG takes an input spectrum, transfer function, and associated exposure time, and optimizes phase sets of a modified Gaussian distribution such that extreme realizations with return periods equal to the exposure time of the desired process, as well as the input that leads to those realizations, can be generated. In the present application, the input to the corresponding Gaussian extreme time series was used as input to the non-linear, non-Gaussian model to estimate the extremes of the non-linear process, conditioned on the Gaussian process being extreme. The process of Gaussianizing the non-linear time series, entering it into the DLG, and using the resulting input time series as input to the non-linear model was iterated to further develop the conditional extreme pdf. The relationship between these conditional, developed extrema and the observed extrema of the process was applied to determine the probability of human injury. Here, injury is assumed to be related to two possibly correlated random variables: the magnitude and duration of an extreme acceleration event.
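One common way to Gaussianize a non-Gaussian series is a rank-based transform, mapping each sample to the standard-normal quantile of its empirical rank. The sketch below uses that approach on a skewed synthetic signal; the paper's exact transform may differ.

```python
# Minimal sketch of a rank-based Gaussianization: map each sample of a
# non-Gaussian series to the standard-normal quantile of its empirical rank.
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
nd = NormalDist()

# Non-Gaussian example series: squared Gaussian noise, standing in for an
# impact-like, heavily skewed signal.
x = [random.gauss(0, 1) ** 2 for _ in range(2000)]

# Rank-based transform: z_i = Phi^{-1}((rank_i + 0.5) / N)
order = sorted(range(len(x)), key=lambda i: x[i])
z = [0.0] * len(x)
for rank, i in enumerate(order):
    z[i] = nd.inv_cdf((rank + 0.5) / len(x))

print(f"gaussianized mean = {mean(z):.4f}, sd = {stdev(z):.4f}")
```

The transform is monotone, so the largest excursions of the original series correspond one-to-one with the largest excursions of the Gaussianized series, which is what allows extremes found in the Gaussian domain to be mapped back to the non-Gaussian process.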

Significance of wave data source selection for vessel response prediction and fatigue damage estimation

The availability of detailed environmental hindcast data opens the door for virtual structural health monitoring; however, the impact of hindcast wave data selection on the results of such an approach has not been explored. While studies on the differences between wave models have been conducted in the past, extensions of these comparisons to resultant vessel response predictions and fatigue damage estimates are limited. At three separate geographical locations, this work compared hindcast wave data from NOAA’s WAVEWATCH III (NWW3) Multigrid Production Hindcast, the EU’s CMEMS Global Ocean Waves Analysis and Forecasting Product, and National Data Buoy Center buoys. In addition to comparisons of wave parameters at each location, the resultant heave, pitch, and vertical bending moment responses and fatigue damage of a destroyer-sized naval combatant, the DTMB 5415, were compared. The novelty of this work lies in its large scope, which calculated responses every 3 h for all of 2017 in 32 speed and heading combinations for each location and wave data source. The results show that differences between wave data sources propagated to the vessel response predictions. These differences were then amplified in the calculation of fatigue damage, causing significant discrepancies between data sources after just one year.
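The amplification from wave data to fatigue damage can be illustrated with Miner's rule and a power-law S-N curve: damage per window scales like stress raised to the S-N exponent, so a modest disagreement in wave height grows to a much larger disagreement in damage. The S-N constants and the linear stress model below are illustrative stand-ins, not the paper's structural model.

```python
# Sketch of why wave-source differences amplify in fatigue estimates:
# with an S-N curve N(S) = K * S**(-m), damage per window scales like S**m,
# so a difference in wave height (hence stress) is raised to the power m
# (~3 for welded steel). All numbers here are illustrative.

M_EXP = 3.0          # S-N slope exponent (typical for welded steel details)
K_SN = 1.0e12        # S-N curve constant (illustrative)
WINDOW_S = 3 * 3600  # 3-hour window, as in the hindcast comparison

def window_damage(hs, tz, stress_per_hs=10.0):
    """Miner's-rule damage for one window: cycles / cycles-to-failure."""
    stress_range = stress_per_hs * hs      # toy linear stress model (MPa)
    cycles = WINDOW_S / tz
    n_fail = K_SN * stress_range ** (-M_EXP)
    return cycles / n_fail

# Same sea state reported slightly differently by two wave data sources:
hs_a, hs_b, tz = 3.0, 3.3, 8.0             # a 10% disagreement in Hs
d_a, d_b = window_damage(hs_a, tz), window_damage(hs_b, tz)

hs_ratio = hs_b / hs_a
damage_ratio = d_b / d_a
print(f"Hs ratio: {hs_ratio:.2f}, damage ratio: {damage_ratio:.2f}")
```

A 10% wave-height difference becomes roughly a 33% damage difference per window under these assumptions, and such per-window differences accumulate over a year of 3-hour windows.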

Combined stochastic lateral and in-plane loading of a stiffened ship panel leading to collapse

With increasingly harsh ocean environments and the push to extend the service life of vessels and platforms, lifetime structural performance is an increasingly important factor for differentiating between design options. Despite many advancements in structural modeling, however, reliability analysis may still be too time consuming and computationally expensive to be employed for local design choices. Reliability analysis for stiffened ship panels is additionally complicated because the collapse mechanism is driven by combined stochastic lateral and in-plane loading effects [1]. This paper employs the non-linear Design Loads Generator (NL-DLG) process [2] to efficiently estimate the probability of stiffened ship panel collapse while retaining the wave profiles which lead to lifetime responses. This information allows a direct comparison between panel options and highlights critical panel parameters which are strongly related to design robustness. The resulting ensemble of short NL-DLG wave profiles leads to statistical characteristics of the panel designs similar to those from brute-force Monte Carlo simulations, but reduces the required simulation time by a factor of nearly 87,000 for the same number of simulations. These directed wave profiles offer naval architects the opportunity to efficiently examine rare, complex structural responses by linking high-fidelity structural and hydrodynamic models. The potential of the NL-DLG process is that relevant information about complex marine structures, even those with limit surfaces excited by combined non-Gaussian loading, can be obtained efficiently and in earlier stages of the design process via low-order surrogate models.

Codesign case study of a planing craft with active control systems

This case study presents a novel insight into the design of a codesigned planing craft with an active control system (ACS), along with its potential advantages and disadvantages when compared with a traditionally designed vessel (i.e., a vessel whose geometry is first selected, and then its ACS is implemented). This work has three purposes: 1) present tools a designer can use to codesign a planing craft with its ACS, 2) use these tools to expand the design space and further explore the potential of codesign, and 3) investigate the feasibility of a planing craft with ACS designed to the codesign results found in 2) and compare it with a traditionally designed vessel. The vessel particulars that are numerically optimized are the beam, deadrise, longitudinal center of gravity (lcg), and two tuning parameters for the ACS’s linear quadratic regulator. In the case study, the codesigned vessel had 4% lower drag at the design speed and Sea State (SS) 3, but in lower SSs it had drag savings of 10% and seakeeping improvements of around 40% for the investigated seakeeping metric. The case study suggests that although the codesigned vessel is technically feasible, it would require unconventional hull/deck design, a result which emphasizes the importance of considering the coupling between a planing craft and its ACS early in concept design.
In search of a better performing planing craft, a naval architect could consider using an active control system (ACS) in their designs. Although they will encounter published research confirming performance improvements when an ACS is used (Wang 1985; Savitsky 2003; Xi & Sun 2006; Kays et al. 2009; Engle et al. 2011; Hughes & Weems 2011; Rijkens et al. 2011; Shimozono & Kays 2011; Rijkens 2013), literature addressing the concept design process of a planing craft that will have an ACS is, to the best of the authors’ knowledge, limited only to the previous work by the authors (Castro-Feliciano et al. 2016, 2018). That work suggests that the benefit from codesigning (as opposed to sequentially designing the vessel geometry and later adding an ACS) can be significant, and that codesign should be the design methodology followed when designing a planing craft that will have an ACS.
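The distinction between codesign and sequential design can be made concrete with a toy scalar example: a surrogate "plant" whose open-loop dynamics depend on a geometry parameter, closed by a scalar linear quadratic regulator. Codesign searches geometry and controller weight jointly, while sequential design fixes the geometry for minimum calm-water drag first. All models and numbers below are hypothetical stand-ins, not the paper's planing-craft models.

```python
# Toy illustration of codesign vs. sequential design with a scalar LQR.
# Plant: xdot = a(g)*x + b*u; scalar LQR gain k = (a + sqrt(a^2 + b^2*q/r))/b.
import math

B = 1.0  # control effectiveness

def drag(g):
    return (g - 1.0) ** 2 + 1.0       # toy calm-water drag, minimized at g = 1

def objective(g, rho):
    """Drag + seakeeping penalty + control-effort penalty, weight ratio rho = q/r."""
    a = 0.5 * g                        # toy open-loop pole, worsens with g
    k = (a + math.sqrt(a * a + B * B * rho)) / B   # scalar LQR gain
    a_cl = a - B * k                   # closed-loop pole (always negative)
    seakeeping = 1.0 / abs(a_cl)       # slower decay -> worse motions
    effort = 0.05 * k * k
    return drag(g) + 2.0 * seakeeping + effort

GS = [0.5 + 0.05 * i for i in range(31)]      # geometry candidates
RHOS = [0.5 * i for i in range(1, 41)]        # LQR weight-ratio candidates

# Sequential: pick g for minimum drag, then tune the controller.
g_seq = min(GS, key=drag)
seq_obj = min(objective(g_seq, r) for r in RHOS)

# Codesign: search geometry and controller together.
co_obj = min(objective(g, r) for g in GS for r in RHOS)
print(f"sequential: {seq_obj:.3f}, codesign: {co_obj:.3f}")
```

Because the codesign search covers a superset of the sequential search, its optimum can never be worse, which is the structural reason the combined design space is worth exploring.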

Ship motion and fatigue damage estimation via a digital twin

A digital twin is a dynamic virtual representation of a physical system that may be used to provide support for life-cycle management decisions. Fundamental exploration of algorithms and approaches to link, compare, and fuse numerical models used in digital twins with real-world sensor data is still needed for ship performance prediction, condition assessment, and ultimately, significant implementation of digital twin technology in the marine industry. In this work, a preliminary digital twin framework for surface ships has been developed to yield time-and-place specific predictions of vessel motions and structural responses given weather forecast or hindcast data for a selected route. Cumulative fatigue damage was predicted and compared for four simulated routes in the Pacific Ocean, and the predicted motions for the simulation with the greatest fatigue damage were analyzed to investigate the possible causes of this increased damage. The notable increase in cumulative damage seen for this route stresses the importance of the ability to track and balance fatigue damage among ships in a fleet. Moreover, this information would further support educated maintenance and deployment scheduling decisions to ensure fatigue damage equality among ships in a fleet, while real-time implementation of this digital twin technology would furnish operators with a greater understanding of a weather forecast’s implications, and thereby provide in-mission decision support.
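The motion-prediction step at the heart of such a twin can be sketched spectrally: multiply a wave spectrum at a route point by the squared response amplitude operator (RAO) to obtain the response spectrum, then report the significant response as four times the square root of its zeroth moment. The Pierson-Moskowitz-type spectrum parameterization and the toy RAO below are illustrative assumptions, not a real vessel's.

```python
# Sketch of time-and-place-specific motion prediction: response spectrum =
# |RAO|^2 * wave spectrum, significant response = 4*sqrt(m0). Toy RAO shape.
import math

def pm_spectrum(w, hs, wp):
    """Pierson-Moskowitz-type spectrum parameterized by Hs and peak frequency wp."""
    return (5.0 / 16.0) * hs ** 2 * wp ** 4 * w ** -5 * math.exp(-1.25 * (wp / w) ** 4)

def heave_rao(w, wn=0.8):
    """Toy heave RAO: ~1 in long waves, rolling off above wn (rad/s)."""
    return 1.0 / (1.0 + (w / wn) ** 4)

def significant_heave(hs, wp, dw=0.01):
    m0 = 0.0
    w = dw
    while w < 5.0:
        m0 += heave_rao(w) ** 2 * pm_spectrum(w, hs, wp) * dw
        w += dw
    return 4.0 * math.sqrt(m0)

# One hindcast grid point along a route: Hs = 3 m, peak period 10 s.
h_sig = significant_heave(3.0, 2 * math.pi / 10.0)
print(f"predicted significant heave: {h_sig:.2f} m")
```

Repeating this calculation at every forecast or hindcast point along each candidate route, and feeding the resulting stress spectra into a fatigue model, is how per-route cumulative damage comparisons of the kind described above are built up.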

A dynamic discretization method for reliability inference in Dynamic Bayesian Networks

The material and modeling parameters that drive structural reliability analysis for marine structures are subject to significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem.
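The motivation for dynamic partitioning can be seen in miniature: estimating a small tail probability from a discretized distribution is inaccurate when the interval containing the failure threshold is wide, and iteratively splitting that interval fixes it. The scheme below is a simplified stand-in for the paper's algorithm, using an exponential distribution so the exact answer is known.

```python
# Sketch: estimate P(X > x_fail) from a discretization, iteratively splitting
# the interval that straddles the failure threshold so tail mass is resolved.
import math

LAM = 1.0
X_MAX = 20.0
X_FAIL = 7.0
EXACT = math.exp(-LAM * X_FAIL)          # exact tail probability for Exp(1)

def cdf(x):
    return 1.0 - math.exp(-LAM * min(x, X_MAX))

def tail_estimate(edges):
    """Sum mass of intervals above X_FAIL; spread the straddling
    interval's mass uniformly to apportion its contribution."""
    p = 0.0
    for lo, hi in zip(edges, edges[1:]):
        mass = cdf(hi) - cdf(lo)
        if lo >= X_FAIL:
            p += mass
        elif hi > X_FAIL:                 # straddling interval: uniform share
            p += mass * (hi - X_FAIL) / (hi - lo)
    return p

edges = [0.0, 5.0, 10.0, 15.0, X_MAX]     # coarse initial discretization
for _ in range(25):                       # dynamic refinement iterations
    # split the interval that currently straddles the failure threshold
    i = next(j for j in range(len(edges) - 1)
             if edges[j] < X_FAIL <= edges[j + 1])
    edges.insert(i + 1, 0.5 * (edges[i] + edges[i + 1]))

est = tail_estimate(edges)
print(f"exact: {EXACT:.6f}, discretized estimate: {est:.6f}")
```

Each split halves the width of the straddling interval, so the tail estimate converges geometrically to the exact value while the rest of the discretization stays coarse, which is the efficiency argument behind tail-focused dynamic discretization.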

Updating structural engineering models with in-service data: approaches and implications for the naval community

Despite growing computational and sensing power, naval structural design and analysis remains focused on minimizing structural system weight while ensuring that estimated stress levels remain below allowable thresholds. While this approach allows the rapid design of new vessels that meet existing structural requirements, it struggles to fulfill the U.S. Navy’s growing need for lifecycle support of aging assets. To address these new tasks, extensions of current structural analysis and design tools are required. This paper presents a comprehensive framework for managing fatigue cracks on aging assets by extending traditional design-stage approaches with Bayesian network-based model updating. A methodology for updating structural load estimates that is able to account for changing operational profiles is presented, allowing future fatigue loading to be predicted with increased confidence. A Dynamic Bayesian Network (DBN) approach is taken to represent the time-varying growth of fatigue cracks, including both shipboard inspection data and the updated loading. A robust reliability formulation is used to predict future crack growth risks based on the DBN formulation and the inspection data-to-date. An example is presented for a hypothetical sealift vessel. Finally, a discussion of the implications of such models on Navy design practice, operations, and maintenance is presented.
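The core of updating a structural model with in-service data can be sketched as a grid-based Bayesian update of a crack-growth-rate parameter from a single inspection. The growth model, prior, and measurement noise below are hypothetical placeholders for the paper's DBN formulation.

```python
# Minimal sketch of Bayesian model updating: refine a crack-growth-rate
# parameter from one inspection measurement via a discrete (grid) posterior.
from statistics import NormalDist

A0 = 2.0            # initial crack size (mm)
YEARS = 5.0
MEASURED = 4.1      # inspected crack size after 5 years (mm)
NOISE_SD = 0.3      # inspection measurement error (mm)

def grown_size(rate):
    """Toy deterministic growth model: linear in the rate parameter."""
    return A0 + rate * YEARS

# Discrete prior over the growth rate (mm/year): broad and uniform.
rates = [0.05 + 0.01 * i for i in range(96)]          # 0.05 .. 1.00
prior = [1.0 / len(rates)] * len(rates)

# Likelihood of the inspection under each candidate rate, then normalize.
meas = NormalDist(0.0, NOISE_SD)
post = [p * meas.pdf(MEASURED - grown_size(r)) for p, r in zip(prior, rates)]
total = sum(post)
post = [p / total for p in post]

post_mean = sum(r * p for r, p in zip(rates, post))
print(f"posterior mean growth rate: {post_mean:.3f} mm/year")
```

The posterior concentrates near the rate implied by the inspection, and feeding that updated parameter forward is what tightens the predicted future crack-growth risk relative to the design-stage prior.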

Stochastic nonlinear fatigue crack growth predictions for simple specimens subject to representative ship structural loading sequences

Recent work by the authors investigated an extension of the finite element analysis of plasticity-induced crack closure to non-stationary, ship structural loading sequences by taking advantage of their inherent time-dependent nature in which the larger loading cycles tend to be clustered together. In doing so, first-order load interactions are presumed to arise from the random occurrence and severity of physical storms encountered by ships and offshore structures throughout their service lives. This material hysteresis is captured through a time-dependent crack “opening” level (K_op) which is based on the evolution of a rate-independent, incremental plasticity model simulating combined nonlinear kinematic and isotropic hardening. The result is a mechanistic rather than phenomenological numerical model requiring only experimentally measured fatigue crack growth rates under constant amplitude, cyclic loading (e.g., ASTM E647-13) and a full material constitutive model defined through experimental push–pull tests for the same material. This approach permits a consideration of material behaviors which are physically relevant to structural steels, yet necessarily omitted in the similar application of a strip-yield model. The present paper generalizes the model originally proposed by the authors to now consider arbitrary storm model loading sequences taken from high-fidelity, time-domain seakeeping codes. To predict the fatigue fracture induced by variable amplitude stress records with upwards of 5 × 10^6 time-dependent cycles, a consistent modeling reduction is applied based on the Ordered Overall Range (OOR) or racetrack counting method. The resultant crack growth behavior is demonstrated to converge remarkably well for sufficiently small refined mesh sizes. Using this model, and by considering different arrangements of the same stress record, the importance of nonlinearities (i.e., those associated with ship response as well as material hysteresis) is emphasized.
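The spirit of the Ordered Overall Range reduction can be sketched with a simple hysteresis ("racetrack-style") filter: reversals smaller than a gate width are discarded so that only the load cycles that matter for crack growth remain. This is a minimal stand-in, not the paper's exact OOR implementation.

```python
# Simplified racetrack-style load sequence reduction: discard small reversals
# below a gate width, then keep only turning points of what remains.

def racetrack_filter(series, gate):
    """Keep a point only when it moves at least `gate` away from the last
    retained point; then drop interior points that are not turning points."""
    if not series:
        return []
    kept = [series[0]]
    for x in series[1:]:
        if abs(x - kept[-1]) >= gate:
            kept.append(x)
    # retain only turning points (local maxima/minima) of the kept sequence
    out = [kept[0]]
    for prev, cur, nxt in zip(kept, kept[1:], kept[2:]):
        if (cur - prev) * (nxt - cur) < 0:
            out.append(cur)
    out.append(kept[-1])
    return out

# A load record with large storm-like cycles and small intermediate ripples:
record = [0.0, 5.0, 4.8, 5.1, -4.0, -3.9, -4.2, 6.0, 0.2, 0.1, 0.3, -5.0]
reduced = racetrack_filter(record, gate=1.0)
print(f"{len(record)} points -> {len(reduced)} points: {reduced}")
```

Applied to records with millions of time-dependent cycles, this kind of reduction is what makes cycle-by-cycle crack-growth simulation tractable while preserving the large, clustered storm cycles that drive load interaction.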

Improving surrogate-assisted variable fidelity multi-objective optimization using a clustering algorithm

Surrogate-assisted evolutionary optimization has proved to be effective in reducing optimization time, as surrogates, or meta-models, can approximate expensive fitness functions during the optimization run. While this is a successful strategy to improve optimization efficiency, challenges arise when constructing surrogate models in higher dimensional function space, where the trade space between multiple conflicting objectives is increasingly complex. This complexity makes it difficult to ensure the accuracy of the surrogates. In this article, a new surrogate management strategy is presented to address this problem. A k-means clustering algorithm is employed to partition model data into local surrogate models. The variable fidelity optimization scheme proposed in the author’s previous work is revised to incorporate this clustering algorithm for surrogate model construction. The applicability of the proposed algorithm is illustrated on six standard test problems. The presented algorithm is also examined in a three-objective stiffened panel optimization design problem to show its superiority in surrogate-assisted multi-objective optimization in higher dimensional objective function space. Performance metrics show that the proposed surrogate handling strategy clearly outperforms the single surrogate strategy as the surrogate size increases.

Testing of a spreading mechanism to promote diversity in multi-objective particle swarm optimization

The design of many real-life engineering systems involves optimization according to multiple, often conflicting, objectives. In this paper, an algorithm called spreading multi-objective particle swarm optimizer (SMOPSO) is developed and tested for optimization problems with two objectives. The motivation for SMOPSO is to promote a high diversity of solutions found in two-objective particle swarm optimization. This is attempted through the use of a spreading function based on neighboring particle positions and an archive controller which discriminates based on particle spacing. The spreading function directs non-dominated particles away from their nearest neighbor, aiming for evenly-spaced solutions as particles “spread out”. To test whether such an approach can indeed improve Pareto front diversity, a performance comparison of SMOPSO is made to two benchmark algorithms. Preliminary results suggest the proposed algorithm may improve the diversity of solutions for a limited selection of optimization problems, but at the expense of other important measures of performance, which are discussed in this paper. SMOPSO’s performance degrades for more difficult optimization problems, such as those with multiple fronts and narrow global minima. An example application of SMOPSO to a theoretical, two-objective high-speed planing craft design problem is also given.
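The spreading idea, pushing each non-dominated particle away from its nearest neighbor to even out their spacing, can be sketched on points in a two-objective space. The update rule and step size below are illustrative, not SMOPSO's exact equations.

```python
# Minimal sketch of the spreading mechanism: each point in a two-objective
# space receives a small displacement directed away from its nearest
# neighbor, encouraging evenly spaced non-dominated solutions.
import math

def nearest(i, pts):
    return min((j for j in range(len(pts)) if j != i),
               key=lambda j: math.dist(pts[i], pts[j]))

def spread_step(pts, step=0.05):
    """Move every point slightly away from its nearest neighbor."""
    new = []
    for i, p in enumerate(pts):
        q = pts[nearest(i, pts)]
        d = math.dist(p, q) or 1e-12
        new.append((p[0] + step * (p[0] - q[0]) / d,
                    p[1] + step * (p[1] - q[1]) / d))
    return new

def min_spacing(pts):
    return min(math.dist(pts[i], pts[nearest(i, pts)]) for i in range(len(pts)))

# Clumped points along a (toy) Pareto front in objective space:
front = [(0.0, 1.0), (0.05, 0.95), (0.1, 0.9), (0.9, 0.1), (1.0, 0.0)]
before = min_spacing(front)
for _ in range(5):
    front = spread_step(front)
after = min_spacing(front)
print(f"min spacing: {before:.3f} -> {after:.3f}")
```

A few iterations increase the minimum spacing of the clumped points, illustrating the diversity benefit; the abstract's caveat also shows up here, since repulsion alone does nothing to improve convergence toward the true front.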