I believe that systems will receive high-level strategic goals ("Build one car according to the following specification: ...") and will be autonomous in how they achieve these goals, together with people.

Robots & Humans Cooperating and Coexisting in the Future

Simon Mayer | University of St. Gallen

How will technology support the interaction and collaboration of people and machines (in particular: robots)?

In short, we require technologies that overcome information silos to integrate across systems, and we need novel HCI mechanisms to integrate people as well. Let's consider the information silos first: in this context, we view each machine/robot as an information silo that contains, for instance, information about the abilities of the machine and about how to communicate with the machine to use these abilities. For machines to work together in a more autonomous way, they need to be able to understand one another's functionality and to use that functionality, thereby enabling the automatic composition of higher-level plans of action that involve multiple machines (see https://ieeexplore.ieee.org/document/7444198). To enable this, researchers such as our team at the University of St. Gallen and standardization bodies (e.g., the Web of Things Working Group of the World Wide Web Consortium) are currently investigating ways of creating, presenting, advertising, and discovering so-called "functional profiles" that are attached to machines and specify their abilities and interfaces in a machine-readable way, as well as techniques for composing such functional profiles into composite plans (see faamas.org/Proceedings/aamas2018/pdfs/p813.pdf).
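To make this concrete, here is a minimal sketch of what a machine-readable functional profile could look like. The structure is loosely inspired by W3C Web of Things Thing Descriptions, but every identifier, field name, and URL below is an illustrative assumption, not the actual standard or a specific implementation.

```python
# A minimal, illustrative "functional profile" for a robot arm,
# loosely inspired by W3C Web of Things Thing Descriptions.
# All names, fields, and URLs here are hypothetical examples.
robot_arm_profile = {
    "id": "urn:example:robot-arm-7",
    "title": "6-axis robot arm",
    "actions": {
        "pick": {
            "description": "Pick up an object at a given position",
            "input": {"type": "object",
                      "properties": {"x": {"type": "number"},
                                     "y": {"type": "number"},
                                     "z": {"type": "number"}}},
            # How to invoke the ability over the network:
            "forms": [{"href": "https://robot-arm-7.example/actions/pick",
                       "contentType": "application/json"}],
        },
        "place": {
            "description": "Place the held object at a given position",
            "forms": [{"href": "https://robot-arm-7.example/actions/place"}],
        },
    },
}

def abilities(profile: dict) -> list[str]:
    """List the abilities that a functional profile advertises."""
    return list(profile.get("actions", {}).keys())

print(abilities(robot_arm_profile))  # ['pick', 'place']
```

Because the profile is plain structured data, another machine (or a planner) can discover what this robot offers and how to call it without any hand-written integration code.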

Now, let's turn to the people: people will interact with such "partially autonomous" systems in two main ways. First, they will need an intuitive way of specifying high-level goals for groups of machines/robots (along with virtual services). Second, they will directly interact with these systems and therefore need ways of monitoring what partially autonomous machines are up to - for safety, and simply to stay aware of what their environment is doing, which is a crucial factor for user acceptance of autonomous systems. Needless to say, these interfaces need to complement the low-level safety mechanisms that are built into, for example, industrial robots.

 

How autonomous do you think robots will become in the factory environment over the next 5 to 10 years?

This relates to the question above, of course: I believe that systems will receive high-level strategic goals ("Build one car according to the following specification: ...") and will be autonomous in how they achieve these goals, together with people - i.e., in the tactical integration of interfaces within plans of action and in the execution of these plans. This is the technological viewpoint; from an industrial perspective, one of the main obstacles to this actually being deployed is the way industrial equipment is commissioned, which assumes much more static devices that keep doing the same thing over and over, rather than dynamically composing their functionality in response to strategic goals. A toy sketch of such dynamic composition follows below.
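As an illustration of tactical composition, the sketch below chains machine abilities (described as precondition/effect pairs) into a plan for a strategic goal. The machines, abilities, and the greedy chaining strategy are all invented for this example; the cited papers use semantically richer functional profiles and planners.

```python
# Toy forward-chaining composer: given abilities advertised by several
# machines (as precondition/effect pairs) and a strategic goal, chain
# abilities until the goal state is reached. Purely illustrative.
from typing import NamedTuple

class Ability(NamedTuple):
    machine: str
    name: str
    preconditions: frozenset
    effects: frozenset

ABILITIES = [
    Ability("press",    "stamp_body",  frozenset({"sheet_metal"}),
            frozenset({"car_body"})),
    Ability("robot-7",  "mount_doors", frozenset({"car_body"}),
            frozenset({"body_with_doors"})),
    Ability("paintbot", "paint",       frozenset({"body_with_doors"}),
            frozenset({"painted_car"})),
]

def compose_plan(state: set, goal: str) -> list[str]:
    """Greedily chain abilities whose preconditions are satisfied."""
    plan = []
    while goal not in state:
        applicable = [a for a in ABILITIES
                      if a.preconditions <= state and not a.effects <= state]
        if not applicable:
            raise RuntimeError(f"no plan found for goal {goal!r}")
        step = applicable[0]
        state |= step.effects       # apply the ability's effects
        plan.append(f"{step.machine}: {step.name}")
    return plan

print(compose_plan({"sheet_metal"}, "painted_car"))
# ['press: stamp_body', 'robot-7: mount_doors', 'paintbot: paint']
```

The point is that no engineer scripted the order of operations: the sequence falls out of the machines' advertised abilities and the stated goal, so swapping in a different machine with a compatible profile requires no re-programming.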

 

How will users know what autonomous machines are up to?

Here, we need to find answers to two questions: First, how do we deliver this information? Second, what information do we deliver?

Since this information is usually local in nature and needs to be presented locally and in a timely manner, I believe that, generally speaking, this is an ideal application for Mixed Reality technology: overlaying an information layer of "what is this machine doing" or "what systems is this machine communicating with" over the physical machine, in situ (https://graz.pure.elsevier.com/en/publications/holointeractions-visualizing-interactions-between-autonomous-cogn). But this is context-dependent: if the information is personal in nature, wearable Mixed Reality headsets such as the Magic Leap One hardware can do the trick; if it is not, we can use augmented reality projections. If the information cannot, or should not, be shown graphically, we might be able to use ambient sound to place cues about machine actions.

This brings us to the other question of what to show, which depends on context even more: in industrial, roboticized environments, the immediate answer is perhaps that we should directly visualize, as a graphical overlay, what movements a robot will perform next. But over time, people might start to trust these robots and become interested in other kinds of information, such as why the (autonomous) robot is performing a specific action, and this will call for different types of overlays.
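The "how do we deliver it" question can be read as a context-dependent dispatch over presentation modalities. The following sketch merely encodes the rules of thumb from the answer above; the two context flags and the function itself are illustrative assumptions, not a proposed system.

```python
# Illustrative dispatch over presentation modalities, encoding the
# rules of thumb above: personal information goes to a wearable MR
# headset, public information to an AR projection, and information
# that cannot (or should not) be shown graphically to ambient sound.
def choose_modality(is_personal: bool, graphical_ok: bool) -> str:
    if not graphical_ok:
        return "ambient sound cue"
    if is_personal:
        return "wearable MR headset overlay (e.g., Magic Leap One)"
    return "augmented reality projection"

print(choose_modality(is_personal=True,  graphical_ok=True))   # headset
print(choose_modality(is_personal=False, graphical_ok=True))   # projection
print(choose_modality(is_personal=False, graphical_ok=False))  # sound cue
```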

 

How will the robot know what the user is up to?

Contextual sensors such as motion-sensing devices and body trackers, together with algorithms that can identify users, are being used for this purpose in research labs. In real environments, however, the major obstacle is not the monitoring technology: this kind of instrumentation conflicts with workers' rights to (at least partial) privacy at the workplace, and the direct monitoring and tracking of workers is unlawful in many countries, particularly in the EU. These issues will need to be resolved, as they are, importantly, connected with user acceptance as well.

 

Is there a safety problem? How can technology improve the safety of people who work in human-robot collaborative environments?

Removing the cages around large robots and thereby creating "collaborative robots" directly implies safety risks that need to be mitigated with appropriate sensing equipment and fail-safe technologies; novel fail-safe mechanisms may even be able to predict what people are about to do, a concept we have started calling "Predictive Fail-Safe" (https://graz.pure.elsevier.com/en/publications/predictive-fail-safe-improving-the-safety-of-industrial-environme). Coming back to the automated planning in response to strategic goals that I mentioned above, there is another highly interesting avenue that we've explored: we taught work-safety rules and regulations to the algorithm that plans machine interactions for such a goal. Given this additional information, the algorithm can directly avoid safety hazards for people (e.g., by having a robot lift heavy objects instead of a person), and it can implement mitigation strategies if a hazard cannot be avoided (e.g., by reminding workers to wear the appropriate safety equipment). In our work, which was published last month, we codified several rules of the U.S. Occupational Safety and Health Administration and demonstrated how this can be achieved (https://2018.semantics.cc/ensuring-workplace-safety-goal-based-industrial-manufacturing-systems); a toy sketch of the idea follows below.
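As a toy illustration of safety-aware planning (not the OSHA encoding from the cited paper), consider a planner hook that checks each planned step against codified rules, re-assigns a hazardous task to a robot where possible, and attaches a mitigation otherwise. All rule content, field names, and thresholds below are invented for illustration.

```python
# Toy illustration of safety-aware planning: codified rules either
# re-assign a hazardous step to a machine or attach a mitigation.
# Rule content and structure are invented for this sketch; the cited
# work codifies actual OSHA regulations in a semantic planner.
MAX_HUMAN_LIFT_KG = 23  # illustrative threshold, not an OSHA figure
MAX_NOISE_DB = 85       # illustrative threshold, not an OSHA figure

def plan_step(task: dict) -> dict:
    step = dict(task)
    mitigations = []
    # Rule 1: avoid the hazard outright where possible
    # (have a robot lift heavy objects instead of a person).
    if step["action"] == "lift" and step["weight_kg"] > MAX_HUMAN_LIFT_KG \
            and step["assignee"] == "worker":
        step["assignee"] = "robot"
    # Rule 2: if a hazard cannot be avoided, attach a mitigation
    # (remind the worker to wear the appropriate safety equipment).
    if step.get("noise_db", 0) > MAX_NOISE_DB and step["assignee"] == "worker":
        mitigations.append("remind worker to wear hearing protection")
    step["mitigations"] = mitigations
    return step

print(plan_step({"action": "lift", "weight_kg": 40, "assignee": "worker"}))
# -> re-assigned to the robot, no mitigations needed
print(plan_step({"action": "grind", "noise_db": 95, "assignee": "worker"}))
# -> stays with the worker, hearing-protection reminder attached
```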

 

And what about the collaboration among machines - will we need to explicitly program machine interactions in the future, just like we do today?

We'll need to "program", i.e., define, the strategic goals mentioned above, and we will need interfaces for this. But this will be a different kind of programming, as it will be declarative instead of imperative: we will state what should be achieved rather than scripting how each machine interaction unfolds.
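To illustrate the distinction: today, an engineer imperatively scripts every machine interaction; tomorrow, they would declare what should be achieved and leave the "how" to an automated planner. Both snippets below are schematic assumptions, not an existing goal language or API.

```python
# Imperative style (today): the engineer scripts every interaction
# between specific machines, in a fixed order.
def build_car_imperative(press, robot, paintbot):
    body = press.stamp_body("sheet_metal")
    with_doors = robot.mount_doors(body)
    return paintbot.paint(with_doors)

# Declarative style (tomorrow): state the goal and its constraints;
# a planner such as the toy compose_plan() sketched earlier decides
# which machines do what, and in which order. The goal format below
# is an invented illustration.
strategic_goal = {
    "achieve": "painted_car",
    "constraints": {"spec": "customer-specification-123",
                    "deadline": "2018-12-24"},
}
```

The declarative version survives changes on the factory floor: if a machine is replaced, only its functional profile changes, while the goal stays the same.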

 

Today we are excited about collaboration between robots and humans. What about tomorrow? Will humans disappear from the factory floor and be relegated to glassed-in control booths … or will those disappear too?

In my opinion, people and robots will have complementary abilities for years to come: even if something is technically possible (e.g., extreme dexterity for a robot), it is often not economically feasible. So robots and people will be collaborating in the workplace for some time, and we should remember that we have been working cooperatively with machines - washing machines, dishwashers, etc. - at home for a long time, even if we don't consider them robots. However, this cooperation will become deeper: more collaboration and direct interaction in a shared physical space. Over time, robots will most probably take over more and more of the tasks that people handle today - most prominently "TBD" tasks: tasks that are Tedious, Boring, or Dangerous. Consequently, I agree that people will increasingly take on supervision and monitoring roles. Disappearing control booths, however, would require deeper user acceptance: in many industries (e.g., smart-grid management), automatic decision-making is not desired, i.e., the human must remain in the loop.

 

 

About Simon Mayer
Simon Mayer is a professor of computer science at the University of St. Gallen, where he heads the chair for Interaction- and Communication-based Systems. Before that, Simon led the research group on Cognitive Products at the Austrian research center Pro²Future, and he was part of Siemens' Web of Things research group in Berkeley, California, as a Senior Key Expert for Smart and Interacting Systems. His main research topics are the integration of smart things into the Web, their semantic description, and infrastructures that support human users as well as other machines in finding and interacting with the information and services provided by such devices. As a visiting researcher at the Laboratory for Manufacturing and Productivity at MIT in 2012, he also worked on robustly and securely bringing live data from automobiles to the Web. Simon graduated with a PhD in computer science from ETH Zurich in 2014.

 

The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow

