Striking a balance between risk and innovation: Lessons from an autonomous ship

Back story to a different kind of voyage

I wasn’t sure what to expect when I turned up for an event at the Historic Dockyards in Portsmouth, UK. The planned star of the show – an unmanned ship called Mayflower 400 – couldn’t actually be there because of bad weather. I confess to thinking that this didn’t bode well for the future of maritime automation. I later found out, however, that the issue was not with the autonomous craft itself, but with the manned vessel that needed to accompany it while operating close to shore.

More on that later. In the meantime, let’s quickly review some of the background to what’s known as the Mayflower Autonomous Ship (MAS) project. The ‘400’ part of the vessel’s full name is there as a tribute to the original Mayflower, which transported pilgrims from England to the ‘New World’ a little over four centuries ago. In the words of the man who originally conceived the MAS idea, Brett Phaneuf: “When I heard the suggestion that a replica of the original Mayflower should be built to commemorate the 400th anniversary of that historic voyage, I thought it was a missed opportunity”. His idea was to do something that looked forward rather than back – something that, in 400 years’ time, people might regard as a historic turning point in its own right: an unmanned vessel capable of crossing the Atlantic without a human crew.

To bring this idea to life, Phaneuf brought his innovative maritime engineering team at Submergence together with telemetry and AI specialist Andy Stanford-Clark and others from IBM Research. The child of this collaboration is a vessel packed with advanced communication, navigation and automation technology that’s capable of operating safely at sea without a crew. Its sensors track everything from changing weather conditions, through other vessels and objects it might encounter, to curious wildlife such as the dolphins that have been known to tag along with MAS on occasion.

Lessons for the broader AI and automation discussion

If you’re interested in geeking out on the technology itself, take a look at the MAS 400 website. An aspect of the project I personally find particularly fascinating, though, is the way in which risks have been thought about and addressed, and the lessons that can be fed into the broader AI and automation discussion.

At the time of writing, MAS 400 hasn’t yet completed its transatlantic mission. The first attempt was aborted, not because of issues with the AI or other onboard tech, but because of mechanical failure – specifically a problem with a critical coupling. As Stanford-Clark put it: “We can build redundancy and automated failover into the software and systems that run the ship, but you can’t do that with all elements of the physical structure”. This highlights the importance of engineering quality and robustness when it’s not possible for a human to step in and replace or fix a failed component. This in turn raises the question of regulation.
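As a purely illustrative aside for technically minded readers: the kind of software redundancy and automated failover Stanford-Clark refers to is commonly built around a supervisor that monitors an active component and promotes a standby when it stops responding. The minimal Python sketch below shows that general pattern only – the names, classes and logic are my own assumptions, not anything taken from the MAS codebase.

```python
# Illustrative sketch only - hypothetical names, not the MAS project's code.
# Shows the general idea of automated failover between redundant software
# components: a supervisor polls the active unit's health and promotes a
# standby if the active one stops responding.

import time


class NavUnit:
    """A stand-in for one redundant navigation/control process."""

    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def heartbeat(self) -> bool:
        # In a real system this might check sensor feeds, watchdog timers,
        # process liveness and so on; here it is just a flag.
        return self.healthy


class FailoverSupervisor:
    """Keeps exactly one unit active; fails over when it goes silent."""

    def __init__(self, units: list[NavUnit]):
        self.units = units
        self.active = units[0]

    def tick(self) -> NavUnit:
        if not self.active.heartbeat():
            standby = next((u for u in self.units if u.heartbeat()), None)
            if standby is not None:
                print(f"Failing over from {self.active.name} to {standby.name}")
                self.active = standby
        return self.active


if __name__ == "__main__":
    primary, backup = NavUnit("nav-primary"), NavUnit("nav-backup")
    supervisor = FailoverSupervisor([primary, backup])
    for step in range(3):
        if step == 1:
            primary.healthy = False  # simulate a software fault
        print(f"step {step}: active unit = {supervisor.tick().name}")
        time.sleep(0.1)
```

The point of the pattern – and of Stanford-Clark’s comment – is that this kind of switchover is cheap to build in software, but there is no equivalent when a physical coupling fails mid-ocean.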

Having tangled with regulators in various parts of the world, Phaneuf has some very strong opinions in this area: “I don’t like it when people make up laws because they don’t understand the technology or my motivation – or just don’t like the idea of what I’m doing”. He highlights the need to focus on intent and practicalities when interpreting the law: “We’ve been challenged about not keeping a lookout by sight and by sound as the regulations say. But the regulations don’t specify that this requirement must be met by a human on the boat. Our systems are far more vigilant and sense with far more resolution than any human would be capable of, and they never get tired or distracted. And when people say we are going to apply the manned workboat code, I say it’s not a workboat and it’s not manned”.

On that point, Phaneuf argues that we need to reframe the discussion: “The purpose of a lot of maritime law is safety of life at sea, and the ultimate way to keep people safe at sea is not to put them there. In my opinion, regulatory agencies should be promoting unmanned technology as quickly and loudly as they can”.

Redefining regulation via real-world experience

In line with this, one of the project’s objectives is to work with regulators to help define or redefine regulation, where appropriate, based on real-world experience. Indeed, it’s clear when listening to Phaneuf and Stanford-Clark that risk management and a high regard for safety are central to their thinking.

In practical terms, context is important here. Risks out on the open seas, for example, are mostly defined by the weather. This is why MAS receives continuous updates from state-of-the-art modelling and forecasting systems, as well as monitoring local conditions through its onboard array of sensors. Near-shore operation presents a different set of risks. In that context, MAS might encounter swimmers, paddle boarders, jet skiers and so on. Here the ability to tap into 4G and 5G to enhance information gathering can help, but as a safeguard MAS is always accompanied by a manned vessel when operating within 12 miles of the coast.
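To make the idea of context-dependent safeguards a little more concrete, here’s a minimal Python sketch of how the required protections might vary with distance from shore and forecast conditions. It is purely illustrative – the thresholds, field names and rules are my own assumptions, not the MAS team’s actual logic.

```python
# Illustrative sketch only - the rules and names here are assumptions,
# not the MAS project's real decision logic. It simply shows how an
# autonomous vessel's safeguards might vary with operating context.

from dataclasses import dataclass


@dataclass
class Context:
    distance_from_shore_nm: float   # nautical miles from the coast
    forecast_severity: int          # 0 = calm ... 5 = severe (hypothetical scale)


def required_safeguards(ctx: Context) -> list[str]:
    # Baseline protections apply everywhere.
    safeguards = ["onboard sensor monitoring", "weather model updates"]
    if ctx.distance_from_shore_nm < 12:
        # Near shore: more traffic (swimmers, paddle boarders, jet skis),
        # but also 4G/5G coverage and the option of a manned escort.
        safeguards += ["4G/5G data links", "manned escort vessel"]
    if ctx.forecast_severity >= 4:
        safeguards.append("route replanning for heavy weather")
    return safeguards


if __name__ == "__main__":
    print(required_safeguards(Context(distance_from_shore_nm=5, forecast_severity=2)))
    print(required_safeguards(Context(distance_from_shore_nm=300, forecast_severity=4)))
```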

But what about the danger of MAS being hacked or hijacked? There’s not enough room here to go into the team’s full response; suffice it to say that measures such as proprietary, encrypted messaging and continuous remote monitoring are used to mitigate the risks. The upshot is that the investment required for a bad actor to take control of the vessel is highly unlikely to be worth it. And when questioned about fanatics who might try hacking MAS for fun because of the challenge, Phaneuf replied: “You can’t not do things because there are jerks!”.

So where is the project going from here? Well, one of the obvious objectives is to complete the transatlantic voyage, something the team is preparing for at the moment. With regard to the technology itself, the possibilities for unmanned craft in areas such as environmental research and monitoring are really interesting. So too is the idea of using similar systems to enhance the safety of manned vessels. Whether in the context of commercial, military or pleasure vessels, an ‘AI First Officer’ that’s hyper-aware and always alert can act as a companion to a human captain, reducing risks and improving operational efficiency.

I’m going to be following this project closely from here on in, as it’s directly or indirectly addressing many of the big questions currently circulating in the IT industry. In the meantime, though, let me give you one last quote from Phaneuf: “I don’t believe we should set out on a course in our society where everything is codified and regulated before you create it”.

Oversight and, yes, wariness remain vital

While I agree with this sentiment, I personally think a degree of wariness is still appropriate. Whether down to naivety, negligence or greed, we have already seen how unregulated use of advanced technology can cause real harm when the wrong people are in control. Just consider the algorithm-fuelled culture wars and social division, or the use of advanced profiling to swing elections through micro-targeted misinformation. Even though it’s hard for regulators to keep up and figure out how to act, a degree of oversight is still required.

This is why projects like MAS, driven by people who are open and responsible, as well as curious and innovative, are so valuable: they bring objective thinking and hard-won experience to discussions of risk and regulation as emerging technologies continue to push the boundaries.


Dale is a co-founder of Freeform Dynamics, and today runs the company. As part of this, he oversees the organisation’s industry coverage and research agenda, which tracks technology trends and developments, along with IT-related buying behaviour among mainstream enterprises, SMBs and public sector organisations.