
Reflective and Resilient Autonomous Systems: The Science and Engineering of Self-Help

Autonomous systems quickly trigger thoughts and images of self-driving vehicles. These are the obvious examples of what is likely to be an impending revolution (or rapid evolution?) of autonomy in everyday life. But limiting conversations on autonomy to these sorts of vehicles is short-sighted. Autonomous applications are currently deployed (or about to be deployed) across many realms: spacecraft have been essentially autonomous for many years, and there is great potential for autonomy in maritime shipping.

The recent movie ‘Passengers’ features an autonomous spaceship that self-heals, proactively managing its other systems to mitigate damage and complete its mission. The obvious shortcomings in its autonomy framework essentially establish the movie’s plot, but it does provide a glimpse of what autonomy could and should be: resilient.

There is not a lot of research in this space. Substantial effort goes into ‘asset management,’ but the emergence of ‘big data’ analytics, computational power, and failure mechanics allows something more profound than what we currently see. Intelligent Physical Systems (IPS) will need to be resilient – not just able to navigate intersections or avoid icebergs. They need to learn not only from their own experience, but from the world around them. In fact, a system can be an IPS even if it does nothing but focus on resilience. But to be able to learn, these systems need to be ‘motivated.’

When discussing motivation, we need to talk about ‘value.’ Value is a measure of the net benefit or preference provided by a system or service to a stakeholder. A mining-excavator IPS could be instructed to maximize value in terms of the profit associated with its activities. It should be able to understand its ‘current status,’ control the effort demanded of its own sub-systems to (amongst other things) optimize reliability, ‘demand’ maintenance when it sees fit (perhaps paying attention to things like daytime versus night-time wages), and so on. It should be able to switch modes automatically depending on the availability of the systems around it that handle its raw material, and it should be able to handle an emergency by itself, minimizing risk to those around it. It needs to understand and be motivated to maximize ‘value.’

This breaks the challenge of developing a resilient IPS down into four key areas: value, information, knowledge, and action. These areas allow the segregation of parallel but supporting research. A method of defining a value utility needs to be generated, with the IPS able to quickly learn what an organization truly values even if guidance is initially vague. The ability to process information through techniques that include machine learning needs to be developed: humans should be able to provide the IPS with all the information at hand, and the IPS works out what to do with each information element. Information then needs to be analysed in a way that aligns with the value utility to create knowledge – understanding value means the IPS understands which information is relevant and which is not. And finally, the IPS needs to act on its knowledge. This could be something as basic as changing its maintenance regimes automatically, something as complicated as re-routing effort through its components and sub-systems to realize its overarching objective, or simply changing its operation.
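
As a simplified illustration of how these four areas might fit together, the Python sketch below wires a hypothetical value utility into an information–knowledge–action loop. Every class name, candidate action, and number is an assumption made for the sketch, not part of any existing system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ValueUtility:
    """Maps a predicted outcome to a scalar 'value' score (illustrative)."""
    score: Callable[[dict], float]

@dataclass
class ResilientIPSLoop:
    """Minimal value -> information -> knowledge -> action cycle."""
    utility: ValueUtility
    knowledge: dict = field(default_factory=dict)

    def ingest(self, information: dict) -> None:
        # Knowledge is information interpreted against the value utility;
        # this toy version simply keeps everything and lets act() weigh it.
        self.knowledge.update(information)

    def act(self) -> str:
        # Hypothetical candidate actions with predicted outcomes.
        candidates = {
            "request_maintenance": {"availability": 0.97, "cost": -1_000},
            "reroute_internal_load": {"availability": 0.95, "cost": -200},
            "no_change": {"availability": 0.90, "cost": 0},
        }
        # Choose whichever action the value utility scores highest.
        return max(candidates, key=lambda a: self.utility.score(candidates[a]))

# Value expressed as availability-weighted revenue plus (negative) cost; numbers are illustrative.
utility = ValueUtility(score=lambda o: o.get("availability", 0.0) * 10_000 + o.get("cost", 0.0))
loop = ResilientIPSLoop(utility)
loop.ingest({"bearing_temperature_trend": "rising"})
print(loop.act())   # -> 'reroute_internal_load' under these numbers
```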

All IPS will need to be resilient, whether this is realised now or not.

1. Project Description

To break down the project into digestible research chunks, the following terms are used to describe various parts of the Resilient IPS system:

  • Ears that are able to ‘hear’ information. Information can be provided from every conceivable source, ranging from sensors placed on hardware to automated internet searches.

  • Tools that are able to process information through causal inference. Included in this are emerging concepts of machine learning.

  • Memory that stores knowledge generated by tools that use information incorporated by the ears, along with operating experience (such as what previous actions achieved or did not achieve).

  • Motivator that is able to interpret direct or vague directions and motivate the system to do something that it thinks will ‘please’ its human counterparts.

  • Brain that determines courses of action that incorporate the knowledge stored in the memory, based on the system’s motivation.

  • Hands that implement the courses of action that the brain has determined will most likely please the humans who provided the direction.

The final ‘product’ is actually not a set of ears, tools, memory, motivators, brains, and hands. It is the framework itself, which represents the ‘consciousness’ many people associate with autonomous systems. The product must be (for example) capable of accommodating an ever-increasing set of inference tools, an ever-diversifying set of information sources, memory that can exploit increasing storage capacity, and so on. This framework will be discussed after each element above is outlined below.
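
As one illustration of what such a framework could look like in software, the Python sketch below expresses the six elements as interfaces. All class and method names are assumptions made for the sketch rather than an existing API.

```python
from abc import ABC, abstractmethod

# Illustrative interfaces for the framework elements described above.
# All names are assumptions for the sketch, not an existing API.

class Ears(ABC):
    @abstractmethod
    def listen(self) -> dict:
        """Gather information from any available source (sensors, documents, internet search)."""

class Tool(ABC):
    @abstractmethod
    def infer(self, information: dict) -> dict:
        """Turn information into knowledge, e.g. via causal inference or machine learning."""

class Memory:
    def __init__(self) -> None:
        self._store: dict = {}
    def remember(self, knowledge: dict) -> None:
        self._store.update(knowledge)
    def recall(self) -> dict:
        return dict(self._store)

class Motivator(ABC):
    @abstractmethod
    def utility(self, outcome: dict) -> float:
        """Score how much a predicted outcome would 'please' the human stakeholders."""

class Brain(ABC):
    @abstractmethod
    def decide(self, memory: Memory, motivator: Motivator) -> list:
        """Choose inference and executive actions that maximize the motivator's utility."""

class Hands(ABC):
    @abstractmethod
    def execute(self, actions: list) -> None:
        """Carry out executive actions on the platform (or ask a human to)."""
```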

1.1 Ears – Listening for Information

Resilient IPS need to move past human-generated ‘spreadsheet or tabulated data.’ They should be able to easily access parts information, titles, personnel names, and so on, from which insightful tools could identify links that were previously unknown. Simply ‘knowing,’ based on the name of a component, what its likely materials are, along with manufacturer details, is very useful. With the big-data analytics now available, this information can often be inferred with high certainty from scant data… even knowing what a component interfaces with can yield information that may not have been provided to the IPS when it was initially designed.

Given that understanding materials allows Physics of Failure (PoF) approaches to be implemented, the Resilient IPS could be given access to the internet to actively search for emerging models of failure in various publications. The IPS needs to be able to (for example) replace initially stochastic models of failure with deterministic models should this be possible. This needs to be done autonomously without human control.
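
A minimal sketch of this behaviour is shown below, assuming a made-up keyword table for inferring materials and an Arrhenius-style stand-in for a deterministic PoF model; the constants and component names are invented purely for illustration.

```python
import math
import random
from typing import Optional

# Illustrative sketch: start with a stochastic (Weibull) failure model and swap in a
# deterministic Physics-of-Failure style model once the component's material is inferred.
# The keyword table, parameters, and constants are assumptions for illustration only.

MATERIAL_HINTS = {"steel pin": "steel", "bronze bushing": "bronze", "rubber seal": "elastomer"}

def infer_material(component_name: str) -> Optional[str]:
    for hint, material in MATERIAL_HINTS.items():
        if hint in component_name.lower():
            return material
    return None

def weibull_life_sample(shape: float = 2.0, scale_hours: float = 8000.0) -> float:
    """Stochastic baseline: draw a life estimate from a Weibull distribution."""
    return scale_hours * random.weibullvariate(1.0, shape)

def arrhenius_life(material: str, temp_kelvin: float) -> float:
    """Deterministic PoF-style estimate (illustrative constants only)."""
    activation_ev = {"steel": 0.7, "bronze": 0.6, "elastomer": 0.5}[material]
    boltzmann_ev = 8.617e-5
    return 1e-7 * math.exp(activation_ev / (boltzmann_ev * temp_kelvin))

def life_estimate(component_name: str, temp_kelvin: float = 350.0) -> float:
    material = infer_material(component_name)
    if material is not None:
        return arrhenius_life(material, temp_kelvin)   # deterministic model preferred
    return weibull_life_sample()                        # fall back to the stochastic model

print(round(life_estimate("Main boom steel pin"), 1))   # uses the deterministic model
```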

1.2 Tools – Inferring Knowledge from Information

There is already a wide set of inference tools, but their scope is expanding in terms of capability and utility. Basic inference techniques, for example, rely on Monte Carlo simulation, and the algorithms used to execute it are constantly improving. It is reasonable to expect that more ‘inference tools’ will be developed over time to help infer knowledge from information.
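
As a concrete example, a minimal Monte Carlo inference of mission reliability might look like the sketch below; the distributions and parameters are assumptions chosen only for illustration.

```python
import random

# Minimal Monte Carlo sketch: estimate the probability that a two-component series
# system survives a 1,000-hour mission, given assumed life distributions for each part.

def survives_mission(mission_hours: float = 1000.0) -> bool:
    pump_life = random.weibullvariate(4000.0, 1.8)      # Weibull: scale 4,000 h, shape 1.8
    seal_life = random.expovariate(1.0 / 6000.0)        # exponential: mean life 6,000 h
    return min(pump_life, seal_life) > mission_hours    # series system: weakest part governs

def mission_reliability(trials: int = 100_000) -> float:
    return sum(survives_mission() for _ in range(trials)) / trials

print(f"Estimated mission reliability: {mission_reliability():.3f}")
```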

A key tool is machine learning. It is very useful when patterns that could lead to the discovery of causal relationships are not obvious to humans, and it is most useful when confronted with ‘big data.’ The number of sensors fitted to contemporary equipment effectively produces such ‘big data’: electric cars have virtually every aspect of their operation monitored by sensors, even if it is not yet understood whether the ensuing data will be useful.
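
The sketch below illustrates the idea using scikit-learn’s IsolationForest to flag unusual readings in synthetic sensor data; the data, the choice of model, and the contamination setting are all assumptions rather than recommendations.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is available

# Illustrative sketch: an unsupervised model flags unusual sensor patterns that a human
# might not notice. The synthetic 'sensor' readings and parameters are assumptions.

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60.0, 1.2], scale=[2.0, 0.05], size=(5000, 2))   # temperature (C), vibration (g)
faulty = rng.normal(loc=[75.0, 1.8], scale=[2.0, 0.05], size=(20, 2))     # an emerging fault
readings = np.vstack([normal, faulty])

model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = model.predict(readings)            # -1 marks readings the model considers anomalous
print(f"Flagged {int((flags == -1).sum())} of {len(readings)} readings for closer inference")
```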

A Resilient IPS needs to be provided with a ‘toolbox,’ from which specific tools are selected on a scenario-specific basis. It needs to be able to choose which tool to use, judge how well it will work, and assess how much ‘value’ will be added to the organization in doing so. Tools that support other tools will need to be selected automatically. And once knowledge has been gained, it needs to be stored.
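
One way the selection could be framed is as an expected value-add calculation over the toolbox, as in the sketch below; the tool names, costs, and benefit figures are invented for illustration.

```python
# Illustrative sketch of scenario-based tool selection: each candidate tool advertises an
# estimated compute cost and an estimated reduction in uncertainty, and the IPS picks the
# tool with the highest expected value-add. Names and numbers are assumptions.

TOOLBOX = {
    "monte_carlo":        {"compute_cost": 5.0,  "expected_uncertainty_reduction": 0.20},
    "machine_learning":   {"compute_cost": 40.0, "expected_uncertainty_reduction": 0.45},
    "physics_of_failure": {"compute_cost": 15.0, "expected_uncertainty_reduction": 0.35},
}

def value_add(tool: dict, value_per_unit_uncertainty: float = 200.0) -> float:
    # Value gained from reduced uncertainty, minus the cost of running the tool.
    return tool["expected_uncertainty_reduction"] * value_per_unit_uncertainty - tool["compute_cost"]

def choose_tool(toolbox: dict) -> str:
    return max(toolbox, key=lambda name: value_add(toolbox[name]))

print(choose_tool(TOOLBOX))   # -> 'physics_of_failure' under these numbers
```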

1.3 Memory

Storing knowledge (beyond data) can be problematic. The Resilient IPS will need to be able to establish its own ‘metrics of interest’ and store them for subsequent use. It is likely that this will need to use a human-generated taxonomy (or ‘language’) that can be suitably used thereafter. The metrics of interest will have inherent uncertainties. They could be summarized simply using the parameters of known probability distributions, or using samples – which will become increasingly relevant for joint-metric distributions.

These metrics of interest can also have their uncertainty reduced over time by additional inference on available information. This could be resource intensive, meaning that inference tasks will need to be appropriately allocated and scheduled. Memory itself may be limited in terms of size, meaning that metrics of interest may need priorities associated with them.
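
A minimal sketch of what such a memory entry could look like is shown below, assuming a made-up taxonomy string and a simple priority-based eviction rule when capacity is exceeded.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a 'memory' entry: a metric of interest stored either as the
# parameters of a known distribution or as raw samples, with a priority used when
# storage is limited. The taxonomy strings and fields are assumptions for the sketch.

@dataclass
class MetricOfInterest:
    name: str                                     # taxonomy term, e.g. "pump.bearing.time_to_failure"
    distribution: str = "lognormal"               # parametric summary, if one is adequate
    parameters: dict = field(default_factory=dict)
    samples: list = field(default_factory=list)   # kept when joint or empirical detail matters
    priority: float = 0.5                         # used to decide what to drop when memory is tight

class Memory:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.metrics: dict = {}

    def remember(self, metric: MetricOfInterest) -> None:
        self.metrics[metric.name] = metric
        if len(self.metrics) > self.capacity:
            # Evict the lowest-priority metric first.
            lowest = min(self.metrics.values(), key=lambda m: m.priority)
            del self.metrics[lowest.name]

memory = Memory()
memory.remember(MetricOfInterest(
    name="pump.bearing.time_to_failure",
    parameters={"mu": 8.9, "sigma": 0.4},
    priority=0.9,
))
```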

1.4 Motivator

Inherent in the Resilient IPS is the motivation to do ‘something.’ This could be (for example) maximizing the operating profit associated with an excavator. There will obviously be multiple considerations (including safety), but essentially the motivator will be focused on maximizing some sort of utility. The key is finding that utility, based on some underlying principle of trying to ‘please.’ This utility could be given explicitly (such as a human instructing the system to maximize profit), but the IPS also needs to be able to develop one on its own from vague guidance. Even where explicit guidance is given, the Resilient IPS needs to be able to ‘amend’ the absolute nature of that guidance as it continues to learn. For example, profit might be maximized by having a single human crew operate continuously – which is not possible. The Resilient IPS needs to be able to learn this.
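
The sketch below illustrates the idea: an explicit ‘maximize profit’ utility is amended by a constraint the IPS has learned – here, the hypothetical rule that one crew cannot operate around the clock. All names and numbers are assumptions made for the sketch.

```python
# Illustrative sketch: a utility that starts from explicit guidance ("maximize profit") and is
# amended as the IPS learns constraints the guidance left unsaid.

def profit(plan: dict) -> float:
    return plan["operating_hours"] * plan["profit_per_hour"]

learned_constraints = []   # populated as the IPS learns what the guidance did not say

def crew_hours_limit(plan: dict) -> bool:
    return plan["operating_hours"] <= plan["crews"] * 12   # learned: one crew covers roughly 12 h/day

learned_constraints.append(crew_hours_limit)

def utility(plan: dict) -> float:
    if not all(constraint(plan) for constraint in learned_constraints):
        return float("-inf")               # an infeasible plan cannot 'please' anyone
    return profit(plan)

plans = [
    {"operating_hours": 24, "crews": 1, "profit_per_hour": 900},   # best raw profit, but infeasible
    {"operating_hours": 20, "crews": 2, "profit_per_hour": 850},
]
print(max(plans, key=utility))   # -> the feasible two-crew plan
```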

1.5 Brain

The brain of the Resilient IPS is responsible for deciding what to do based on its motivation. There are two broad categories of decisions.

  • Inference Actions. This could be inferring knowledge from the information at hand simply to get a better idea of ‘utility,’ or of what the ‘humans’ want. It can also be the inference associated with the motivation the system has formed by understanding that utility.

  • Executive Actions. These include things like redistributing internal demand across subsystems, or requesting specific maintenance actions from a support crew. It could be limiting its own functions, or anything else the system has within its power to control. Importantly, an executive action includes initiating a search for more information: if there is too much uncertainty involved, the brain needs to be able to identify an information deficiency or gap and launch an action to resolve it. Executive action decisions primarily inform the ‘hands’ element of the Resilient IPS, as sketched below.
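
A minimal sketch of that choice is given below, assuming a simple relative-uncertainty threshold; the metric, the limit, and the threshold values are invented for illustration.

```python
# Illustrative sketch of the brain's choice between an inference action and an executive action:
# if the uncertainty in a key metric is too high to act on, launch an information-gathering or
# inference task first; otherwise issue an executive action to the 'hands'.

def decide(metric: dict, uncertainty_threshold: float = 0.3) -> dict:
    relative_uncertainty = metric["std"] / metric["mean"]
    if relative_uncertainty > uncertainty_threshold:
        # Inference action: resolve the information gap before committing resources.
        return {"type": "inference", "task": f"gather more data on {metric['name']}"}
    if metric["mean"] < metric["action_limit"]:
        # Executive action: hand off to the 'hands' element.
        return {"type": "executive", "task": f"request maintenance for {metric['name']}"}
    return {"type": "executive", "task": "no change"}

bearing = {"name": "swing bearing remaining life (h)", "mean": 420.0, "std": 60.0, "action_limit": 500.0}
print(decide(bearing))   # uncertainty is acceptable and the limit is breached -> request maintenance
```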

1.6 Hands

The last element of the Resilient IPS is its ‘hands’: how it implements executive actions. These are both platform and technology specific, and could be as simple as sending an email asking a human to do something.
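
The sketch below shows one way a ‘hands’ dispatcher could look, with print statements standing in for platform-specific interfaces such as an email client or a maintenance-management system; all names are assumptions made for the sketch.

```python
# Illustrative sketch of a 'hands' dispatcher: each action type maps to a platform-specific handler.
# The handlers here only print; a real implementation might call a maintenance system or send an email.

def email_maintenance_request(task: str) -> None:
    print(f"[email to maintenance planner] {task}")       # stand-in for an actual email client

def adjust_operating_mode(task: str) -> None:
    print(f"[control system command] {task}")             # stand-in for a platform-specific interface

HANDLERS = {"request_maintenance": email_maintenance_request, "change_mode": adjust_operating_mode}

def execute(action: dict) -> None:
    HANDLERS[action["handler"]](action["task"])

execute({"handler": "request_maintenance", "task": "replace swing bearing within 500 operating hours"})
```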

2. Strategy

Attempting to create the system described above as a single, discrete body of work is beyond the resource scope of current institutions. A ‘piecemeal’ approach under an umbrella framework will be required, where discrete ‘sub-projects’ are undertaken that iteratively develop the overarching system. These sub-projects may be ‘projects’ in their own right – for example, a sub-project in which a Resilient IPS learns a great deal about a component from failure data alone.

These ‘sub-projects’ would be repeated, all within an overarching framework that is built upon each time a sub-project is completed. For example, the ‘taxonomy’ used by the ‘memory’ would be improved over time, becoming more general. A universal taxonomy for each field could itself be a ‘sub-project’ in its own right.

3. Summary and the Way Ahead

Resilient IPS represent a perhaps ‘less visible’ form of autonomous system. However, as emerging autonomous systems push the boundaries of operational employment, they will need to be resilient. Applications range from spacecraft to safety-critical systems – many accidents have stemmed from an organizational inability to understand ‘precursors’ and take steps to mitigate them.

From here, participating organizations need to collaborate to agree upon the ‘framework concept’ above and plot the way ahead. Delineation of responsibility will be key, along with an understanding of respective organizational funding strengths and weaknesses.