ARTICLE: From Lab to Market (Part II): Bridging the Gap – Solutions for Effective Industry-Academic Collaboration

In today’s rapidly evolving technological landscape, the synergy between academic research and industrial innovation has never been more critical. Yet, as we explored in our previous article, significant barriers often hinder effective collaboration between these two sectors. From misaligned incentives to communication challenges, the road to fruitful partnerships is fraught with obstacles. However, where there are challenges, there are also opportunities for transformative solutions. In this article, we investigate how these barriers to academic-industry collaboration can be overcome and more productive partnerships fostered. Here are some strategies I believe could make a significant difference:

1. Educational Outreach

  • Host Workshops and Seminars: Organize events that showcase research capabilities and potential benefits to industry partners. These can help demystify the research process and highlight its value.
  • Develop Industry-Focused Communication: Create materials that explain research in terms of business benefits, ROI, and practical applications.
  • Utilize Social Media: Leverage platforms like LinkedIn to share success stories, insights, and opportunities for collaboration.

2. Flexible Collaboration Models

  • Short-Term Projects: Offer opportunities for smaller, shorter-term collaborations that can serve as ‘proof of concept’ for more extensive partnerships.
  • Tiered Partnership Options: Develop a range of partnership models to suit different company sizes, budgets, and comfort levels with research collaboration.
  • Shared Resource Models: Create systems where multiple industry partners can share the costs and benefits of research initiatives.

3. Build Trust and Understanding

  • Industry Internships for Researchers: Encourage academic researchers to spend time in industry settings to better understand business needs and processes.
  • Academic Sabbaticals for Industry Professionals: Invite industry professionals to spend time in academic settings, fostering better understanding and communication.
  • Joint Advisory Boards: Establish boards with both academic and industry representation to guide research directions and collaboration strategies.

4. Address Financial Concerns

  • Highlight Long-Term ROI: Develop case studies and financial models that demonstrate the long-term return on investment for research collaborations.
  • Explore Public-Private Partnerships: Leverage government funding and initiatives designed to promote industry-academic collaborations.
  • Transparent Cost Structures: Develop clear, understandable cost structures for different types of collaborations to help businesses budget effectively.

5. Streamline Processes

  • Simplify Administrative Procedures: Work on streamlining the often-complex administrative processes involved in setting up research collaborations.
  • Dedicated Liaison Officers: Appoint individuals specifically tasked with facilitating and managing industry-academic partnerships.
  • Clear IP Agreements: Develop straightforward intellectual property agreements that protect both academic and industry interests.

The Path Forward

The future of innovation lies in the synergy between academia and industry. By working together, we can drive progress, enhance productivity, and tackle real-world challenges more effectively. It’s a journey that requires effort, understanding, and adaptability from both sides, but the potential rewards are immense.

As we move forward, I’m eager to hear from both my academic colleagues and industry professionals:

  • What challenges have you faced in establishing or maintaining industry-research collaborations?
  • What successful strategies have you employed to overcome these barriers?
  • How do you envision the future of industry-academic partnerships in your field?

As we explore these solutions, we’ll highlight the valuable contributions of organizations like the Australian Cobotics Centre. This pioneering training institution has been at the forefront of addressing the barriers between academia and industry, particularly in the field of collaborative robotics. Through its unique model of industry-led research, the Centre has been instrumental in developing practical solutions that not only advance academic knowledge but also address real-world industrial challenges. By examining the Centre’s approach, we can gain insights into effective strategies for overcoming the traditional divides between research institutions and commercial enterprises.

Let’s continue this crucial conversation in the comments below. By sharing our experiences and ideas, we can work together to build stronger, more productive bridges between the world of research and the world of industry.

ARTICLE: Accepted Papers for the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Australian Cobotics Centre researchers have two papers accepted for publication at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024 in Abu Dhabi. IROS is one of the largest and most important robotics research conferences in the world, attracting researchers, academics, and industry professionals from around the globe.

Postdoctoral Research Fellow Dr Fouad Sukkar gives a brief summary of the two papers appearing at the conference in October this year.

Constrained Bootstrapped Learning for Few-Shot Robot Skill Adaptation, by Nadimul Haque, Fouad (Fred) Sukkar, Lukas Tanz, Marc Carmichael, and Teresa Vidal Calleja, proposes a new method for teaching robot skills via demonstration. This is often a cumbersome and time-consuming process, since a human operator must provide a demonstration for every new task. Furthermore, there will inevitably be some discrepancies between how the demonstrator carries out the task and how the robot does, for example due to localisation errors, that need to be corrected for the skill to be successfully transferred. This paper tackles these two problems by proposing a learning method that facilitates fast skill adaptation to new tasks that have not been seen by the robot. We do so by training a reinforcement learning (RL) policy across a diverse set of scenarios in simulation offline, and then using a sensor feedback mechanism to quickly refine the learnt policy to a new scenario with the real robot online. Importantly, to make offline learning tractable, we utilise the Hausdorff Approximation Planner (HAP) to constrain RL exploration to promising regions of the workspace. Experiments showcase our method achieving an average success rate of 90% across various complex manipulation tasks, compared to the state-of-the-art, which achieved only 56%.
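
To make the offline-then-online pattern concrete, here is a deliberately simplified Python sketch. It is not the authors' implementation; the dynamics, numbers, and names are invented for illustration. It searches for a good policy parameter across many randomised simulated scenarios, restricted to a promising region of the workspace (standing in for HAP's constrained exploration), and then refines that parameter online from sensed error on an unseen "real" task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for HAP: only explore corrective offsets inside a small,
# promising region of the workspace.
REGION = (-0.05, 0.05)  # metres

def simulate_error(offset, scenario_shift):
    """Toy task: residual error after applying a corrective offset."""
    return abs(scenario_shift - offset)

# --- Offline phase: learn a good initial offset across simulated scenarios.
candidates = np.linspace(REGION[0], REGION[1], 101)
shifts = rng.normal(0.0, 0.02, size=500)          # randomised scenarios
mean_err = [np.mean([simulate_error(c, s) for s in shifts]) for c in candidates]
policy_offset = candidates[int(np.argmin(mean_err))]

# --- Online phase: refine on the "real" robot using sensor feedback.
true_shift = 0.03                                  # unseen real-world discrepancy
for step in range(20):
    sensed_error = true_shift - policy_offset      # e.g. from force/vision feedback
    policy_offset += 0.5 * sensed_error            # simple proportional refinement
print(f"adapted offset: {policy_offset:.4f} m")
```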

Coordinated Multi-arm 3D Printing using Reeb Decomposition, by Jayant Kumar, Fouad (Fred) Sukkar, Mickey Clemon, and Ramgopal Mettu, proposes a framework for utilising multiple robot arms to collaboratively 3D print objects. For robots to do this efficiently and minimise downtime while printing, they must have the flexibility to work closely together in a shared workspace. However, this dramatically increases problem complexity, since the arms must be coordinated so they do not collide with each other or with the partially printed object. This is in addition to the planning problem of effectively allocating parts of the object to each robot while respecting the physical dependencies of the print; for example, an arm can't start extruding a contour until all the contours below it have been printed. All these factors make effective coordination a very computationally hard problem, and we show that with bad coordination you can end up with even worse utilisation than if a single arm had carried out the same print! In this work we address this by performing a Reeb decomposition of the object model, which partitions the model into smaller, geometrically distinct components. This drastically reduces the search space over feasible toolpaths, allowing us to plan highly effective allocations to each arm using a tree search-based method. For producing fast collision-avoiding motions, we utilise the Hausdorff Approximation Planner (HAP). Our experimental setup consists of two robot arms with pellet extruders mounted on their end effectors. We evaluate our framework on 14 different objects and show that our method achieves a mean utilisation improvement of up to 132% over benchmark methods.
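
To see the coordination problem in miniature, the toy scheduler below allocates contours to two arms while respecting the "lower contours print first" dependency. It is a deliberate simplification: the paper's method uses Reeb decomposition and tree search, not this greedy heuristic, and it also handles collision avoidance, which is ignored here.

```python
# Toy two-arm print scheduler (illustrative only).
# Each contour has a duration and may depend on contours below it.
contours = {           # name: (duration, dependencies)
    "base":   (4.0, []),
    "wall_a": (3.0, ["base"]),
    "wall_b": (3.0, ["base"]),
    "top":    (2.0, ["wall_a", "wall_b"]),
}

arm_free = [0.0, 0.0]   # time at which each arm becomes available
finish = {}             # contour -> completion time
done = set()

while len(done) < len(contours):
    # Contours whose physical dependencies (layers below) are complete.
    ready = [c for c, (_, deps) in contours.items()
             if c not in done and all(d in done for d in deps)]
    for c in ready:
        dur, deps = contours[c]
        arm = min(range(2), key=lambda i: arm_free[i])   # least-busy arm
        start = max(arm_free[arm], max((finish[d] for d in deps), default=0.0))
        finish[c] = start + dur
        arm_free[arm] = finish[c]
        done.add(c)

makespan = max(finish.values())
single_arm = sum(d for d, _ in contours.values())
print(f"two-arm makespan: {makespan}, single-arm total: {single_arm}")
```

Even this tiny example shows why allocation matters: a poor assignment of contours to arms can leave one arm idle waiting on dependencies, eroding the benefit of the second arm.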

ARTICLE: Enhancing Human-Robot Collaboration: The Role of Extended Reality

In advanced industries, the integration of Extended Reality (XR) technologies into Human-Robot Collaboration (HRC) presents unprecedented opportunities and challenges. XR, encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), plays a crucial role in overcoming barriers to HRC adoption across various sectors. This article introduces the current applications of XR in HRC, addressing aspects such as types and roles, design guidelines and frameworks, and devices and platforms. It also provides insights into the future direction of XR in HRC, highlighting its potential to enhance collaboration and efficiency in industrial environments.

Extended Reality

In general, Extended Reality (XR) serves as an umbrella term for immersive technologies like Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR). Virtual Reality immerses users in a completely computer-generated environment (including visual, acoustic, and tactile information), while Augmented Reality enhances the real-world environment by overlaying digital information or objects onto it. Mixed Reality, meanwhile, refers to formats that bridge the gap between the real world and Virtual Reality.

In Human-Robot Collaboration (HRC), XR technologies are trending towards enhancing safety, improving workspace design, data visualisation, training operators, and creating more intuitive user interfaces due to their capability to visualise unseen information in the physical world in real time. These applications are closely linked to aiding human decision-making. By enhancing safety, XR technologies reduce the cognitive workload on operators, allowing them to focus on critical decision points. Well-designed XR-enabled workspaces facilitate the seamless integration of human and robotic workflows, boosting collaboration and efficiency. Advanced visualisation and immersive training capabilities provided by XR tools give operators a better understanding and control, leading to higher quality and precision in their decisions. Intuitive XR-based interfaces improve human-robot interactions, resulting in faster and more efficient decision-making. This effective decision-making is crucial in complex and dynamic HRC environments.

Extended Reality in Human-Robot Collaboration

From 2023 onwards, research has explored various types of XR technologies applied in Human-Robot Collaboration (HRC), including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). XR is most commonly used as an interface, but it also serves multiple roles such as development environments, learning environments, and platforms for design, visualisation, simulation, instruction and guidance, task and motion planning, and more.

Currently, VR is used as an interface, evaluation tool, simulation platform, task and motion planning aid, learning environment, design tool, and data-collection tool. AR, by contrast, overlays digital information onto the real world, making it ideal for enhancing and augmenting real-world interactions. MR blends the physical and digital worlds, providing immersive experiences that enhance real-time interactions and task execution. The distinction between AR and MR is often unclear, with AR considered a subset of MR. Telepresence, achievable by combining VR and MR, allows multi-human-robot teams to collaborate from different locations.

In current research on XR in HRC, various XR devices such as Head-Mounted Displays (HMDs), mobile devices, and projectors are utilised. While HMDs are the most commonly employed, projectors are sometimes used for AR-based interfaces in HRC. Additionally, mobile devices like tablets are utilised for AR-based visualisation, instruction and guidance, interfaces, and training.

Regarding software and tools for developing XR in HRC, the game engine Unity is the most popular choice. In specific areas such as HRC fabrication, Building Information Modelling (BIM) platforms and Computer-Aided Design (CAD) tools like Rhino 3D and Grasshopper are used. Unity is generally preferred because it is powerful enough to support various platforms and users.

The Future of Extended Reality in Human-Robot Collaboration

Recently released HMDs such as the Varjo XR-4 and Apple Vision Pro, and AR goggles such as the Xreal Air 2 Pro and Viture Pro, show considerable promise for future use in HRC. The newest HMDs feature enhanced display resolution, higher refresh rates, and reduced latency, making them increasingly powerful. AR goggles, by contrast, are lightweight while still offering high resolution and refresh rates. Moreover, mobile devices such as tablets and smartphones remain highly accessible and user-friendly for mobile AR applications, continuing to be a viable option for future use. The potential of Unreal Engine and WebGL also warrants further exploration. Unreal Engine provides photorealistic visuals for the most immersive visualisations, while WebGL enables users to interact through web-based applications from various locations and devices, enhancing accessibility and flexibility.

Current designs often focus either on XR or HRC without sufficient attention to user experience and human factors. Therefore, future research should integrate human factors and user-centric approaches to enhance the effectiveness and usability of XR in HRC. This comprehensive analysis highlights the importance of combining advanced XR technologies with human-centric design to optimise human-robot collaboration.


ARTICLE: From Lab to Market (Part I): Navigating the Obstacles in Academic-Industry Collaborations

As a researcher deeply invested in advancing knowledge and innovation, I’ve consistently encountered a significant challenge: securing meaningful partnerships with industry. This gap between academia and industry isn’t just a personal observation; it’s a widespread issue that affects the pace of innovation and the practical application of cutting-edge research. Today, I’d like to dig into why this disconnect exists.

The Barriers to Collaboration

  1. Time Constraints

In the fast-paced world of industry, time is often equated with money. This perspective can create significant barriers to research collaboration:

  • Research Timelines: Academic research often operates on longer timelines, sometimes spanning years. This can clash with the quarterly or annual targets that drive many businesses.
  • Production Slowdowns: There’s a prevalent fear that engaging in research might slow down existing production processes or divert resources from immediate business needs.
  • Return on Investment (ROI) Concerns: Companies often struggle to see the long-term benefits of research when faced with short-term pressures to deliver results.

  2. Financial Considerations

The financial aspect of research collaboration is another major hurdle:

  • High Costs: Cutting-edge research often requires significant financial investment in equipment, materials, and personnel.
  • Limited R&D Budgets: Many businesses, especially small and medium enterprises, lack dedicated research and development budgets.
  • Risk Aversion: There’s an inherent uncertainty in research outcomes, making it a risky investment from a business perspective.
  • Funding Complexities: The procedures for securing and managing research funding can be complex and time-consuming for businesses unfamiliar with academic processes.

  3. Knowledge Gap

Perhaps the most insidious barrier is the knowledge gap that often exists between academia and industry:

  • Technological Unfamiliarity: Many industries are comfortable with their current technologies and processes, making them hesitant to explore new, unproven methods.
  • Resistance to Change: There’s often a cultural resistance to change within established industries, making it difficult to introduce new research-based innovations.
  • Communication Challenges: Researchers and industry professionals may struggle to communicate effectively due to differences in jargon, priorities, and perspectives.
  • Lack of Awareness: Many businesses simply aren’t aware of the potential benefits that academic research could bring to their operations.

The Importance of Collaboration

Despite these challenges, the importance of industry-research collaborations cannot be overstated:

  • Innovation Acceleration: When academics and industry professionals work together, it can dramatically speed up the process of turning theoretical knowledge into practical applications.
  • Real-World Problem Solving: Industry partners provide researchers with insights into real-world challenges, helping to guide research in the most impactful directions.
  • Economic Growth: Successful collaborations can lead to new products, services, and even entirely new industries, driving economic growth.
  • Skill Development: These partnerships provide valuable opportunities for skill exchange, benefiting both academic researchers and industry professionals.

While the benefits are clear, bridging the gap between academia and industry remains a complex challenge. In our next article, we’ll explore potential solutions to strengthen these crucial partnerships. Stay tuned for “Bridging the Gap: Solutions for Effective Industry-Academic Collaboration”.

ARTICLE: Addressing gender pay disparities in engineering

Manufacturing is one of the top 3 engineering-heavy sectors in Australia, employing more than 46,000 qualified engineers. The manufacturing sector currently has a 70% male workforce, as discussed by Australian Cobotics Centre PhD candidate Akash Hettiarachchi in his recent webinar. The importance of gender equity to Australia’s global competitiveness in manufacturing was also highlighted in a recent parliamentary inquiry, which recommended a national strategy to attract and retain under-represented groups (including women) to advanced manufacturing careers. Manufacturing organisations, government departments and industry bodies are making concerted efforts to increase gender balance in the sector so they can achieve the benefits of a diverse workforce. 

At present, only 14% of engineers working in Australia are women. I was recently invited by the Australasian Tunnelling Society and Engineers Australia to present and be part of a panel at an International Women in Engineering Day (INWED) event, Bridging the Gap: Addressing Gender Pay Disparities in Engineering. INWED celebrates women’s contribution to the engineering profession and the 2024 theme is Enhanced by Engineering. However, in all industry sectors and occupations in Australia and most of the world, women’s contribution is still under-valued in terms of pay.  

The current gender pay gap in Australia (the difference between the average earnings of men and women) is 21.7%, including full-time, part-time and casual workers and payments such as bonuses, overtime and commission. This means that, on average, for every $1 a male worker makes, a female worker makes 78 cents. The gap is still 13.7% even when including only the base salaries of full-time workers. National statistics, the international Global Gender Gap Index, company reporting, and research show that a gap exists even when considerations such as experience and education are controlled for, and only part of the gap can be attributed to different career choices. A gender pay gap exists across nations, industries, occupations and at different levels of pay. It is, however, higher in male-dominated industry sectors, industries with higher bonus, overtime or commission payments, higher-paid roles, and organisations with fewer women in leadership.

At the Bridging the Gap event, we discussed the gender pay gap, the policy and reporting framework in Australia, and actions that individuals, managers and organisations can take to address pay disparities.  

For the first time in 2024, the Workplace Gender Equality Agency (WGEA) published the gender pay gaps of all private sector employers with 100 or more staff members. The WGEA Data Explorer provides a rich source of data for anyone interested in the gender equity performance, policies and strategies of their own and other organisations. As well as gender pay gap data, policy and action, you can use the WGEA Data Explorer to see and compare industry and employer data on other indicators including the composition of the workforce and boards, access to and use of flexible work and parental leave by men, women and managers, employee consultation and harassment. Initiatives such as conducting and acting on the results of a gender pay audit, making pay more transparent, increasing the proportion of women in leadership, identifying and removing gender bias from recruitment and promotion decisions, and encouraging men to access flexible work and parental leave can all improve the gender pay gap.  

Australian Cobotics Centre Program 5 (The Human-Robot Workforce) has several researchers with experience in researching gender equity. We can assist companies of all sizes to consider how they can evaluate gender equity and realise the benefits for their organisation.  

ARTICLE: Enhancing Hydraulic Maintenance Operations with Multi-modal Feedback

Hydraulic systems are integral to industrial applications that require significant force, such as mining and manufacturing. Despite their power and efficiency, traditional hydraulic systems pose operational risks, especially when relying on binary controls and low-resolution feedback mechanisms. To address these challenges, a research team from the University of Technology Sydney, led by Danial Rizvi, explored the potential of multi-modal feedback to enhance safety and performance in hydraulic maintenance operations.

The Challenges of Traditional Hydraulic Systems

In industrial settings, hydraulic systems are essential for tasks like installing and removing bushings and bearings. However, these systems typically use binary controls, limiting operators to simple open or close actions. This lack of precision can lead to operational errors and safety risks. Operators often rely on visual and auditory cues, which can be inconsistent and unreliable, increasing the potential for accidents and equipment failure.

Multi-modal Feedback: A New Approach

The research aimed to improve hydraulic maintenance operations by integrating haptic feedback through an adaptive trigger mechanism. This approach provides operators with tactile feedback, simulating the pressure build-up in hydraulic systems. The study compared the effectiveness of this haptic feedback against traditional visual and auditory cues.

Methodology

The team conducted a user study involving 10 participants operating a simulated hydraulic system using a re-programmed DualSense controller. This controller provided four types of feedback: force (through adaptive trigger resistance), visual (pressure readings), sound (auditory cues), and vibration (tactile cues). Participants performed tasks under different feedback conditions to evaluate the impact on performance and user experience.
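
As a rough illustration of the force-feedback condition, the Python sketch below maps a simulated hydraulic pressure ramp onto an adaptive-trigger resistance level. The set_trigger_resistance function is a placeholder rather than a real DualSense API call (libraries such as pydualsense expose comparable trigger controls), and the pressure range is an assumed value, not one reported by the study.

```python
import time

MAX_PSI = 3000.0   # assumed working range of the simulated hydraulic system

def set_trigger_resistance(level: int) -> None:
    """Placeholder for a DualSense adaptive-trigger call (device I/O not shown)."""
    print(f"trigger resistance -> {level}/255")

def pressure_to_resistance(psi: float) -> int:
    """Map simulated hydraulic pressure linearly onto trigger resistance (0-255)."""
    return max(0, min(255, int(255 * psi / MAX_PSI)))

# Simulated pressure ramp, as if the operator were holding the trigger closed.
for psi in range(0, 3001, 500):
    set_trigger_resistance(pressure_to_resistance(psi))
    time.sleep(0.1)
```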

Performance Analysis

The study measured three key performance metrics: elapsed time, final pressure (PSI), and extension percentage. The results showed no significant differences in task performance across the different feedback types. However, participants expressed a preference for the adaptive trigger in subjective evaluations, noting that it enhanced their control and reduced cognitive load.

Subjective Ratings

Participants rated their comfort and confidence with each feedback type. The adaptive trigger received the highest median comfort rating, while the vibration feedback was the least preferred. Overall, the study found that while all feedback types enabled participants to achieve the desired hydraulic pressures, the adaptive trigger offered slight advantages in user comfort and perceived control.

Implications for Industrial Maintenance

The integration of haptic feedback into hydraulic systems holds promise for improving safety and efficiency in industrial maintenance. By providing operators with more precise and intuitive control mechanisms, multi-modal feedback systems can reduce reliance on less reliable sensory cues and enhance overall operational safety.

Future Research

Further research is needed to explore the long-term benefits of multi-modal feedback in diverse industrial environments. Expanding the participant pool and incorporating real-world scenarios will help validate these findings and refine the technology for broader application.

Conclusion

The study conducted by the University of Technology Sydney demonstrates the potential of multi-modal feedback to enhance hydraulic maintenance operations. While traditional feedback mechanisms remain effective, the adaptive trigger offers additional benefits in user comfort and control. As industries continue to evolve, integrating advanced feedback systems into hydraulic operations can lead to safer and more efficient maintenance practices.

References:

  • Danial Rizvi, Dinh Tung Le, Munia Ahamed, Sheila Sutjipto, Gavin Paul. “Multi-modal Feedback for Enhanced Hydraulic Maintenance Operations.” University of Technology Sydney.

ARTICLE: Industry 4.0 Awareness and Experience Workshop

These workshops were organised and run by Swinburne University of Technology’s Factory of the Future and were funded through the Victorian Government’s Digital Jobs for Manufacturing (DJFM) program. 

This article is written by Jagannatha Pyaraka, a PhD researcher at Swinburne University of Technology.

In a series of enlightening workshops, Swinburne University of Technology has taken a significant step in bridging the gap between industry professionals and the transformative potential of Industry 4.0 technologies. Over the past few weeks, four workshops were organized at strategic locations to maximize outreach and impact: the VGBO office in Bundoora, Holiday Inn Dandenong, Rydges Geelong, and Mercure Ballarat. These sessions aimed to raise awareness and provide hands-on experience with collaborative robots (cobots), a foundation of modern industrial automation, and other Industry 4.0 technologies such as AR, VR and wearable sensors.

The workshops attracted operations managers, CEOs, CFOs, and other key decision-makers eager to understand the practical applications and benefits of cobots in their respective fields. Together with my ACC colleague, Dr Anushani Bibile, I used the easily portable and cost-effective UFactory xArm6 cobot to demonstrate cobot functionality.

The workshops commenced with an introduction to collaborative robots. Unlike traditional industrial robots, which often require extensive programming and are confined to specific tasks, cobots are designed to share a workspace with humans. Their ease of programming, adaptability to various tasks, and advanced safety features make them suitable for dynamic and evolving industrial environments.

To illustrate these points, we demonstrated a program involving the stacking of four objects. The objects were placed in predefined positions, and the xArm6 was tasked with picking up each object and stacking them. This exercise highlighted the cobot’s ability to perform repetitive tasks and its intuitive programming interface. Using Blockly, a visual programming language, participants observed how quickly and easily they could teach the cobot to execute tasks.
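
For readers curious how such a stacking routine looks in code rather than Blockly, below is a minimal sketch using UFACTORY's open-source xArm-Python-SDK. The IP address, coordinates, speeds, and gripper values are placeholders, and the motion sequence is simplified (no approach or retreat moves); treat it as an outline to check against the SDK documentation rather than a ready-to-run demo.

```python
from xarm.wrapper import XArmAPI

arm = XArmAPI('192.168.1.221')         # placeholder controller IP
arm.motion_enable(enable=True)
arm.set_mode(0)                        # position control mode
arm.set_state(state=0)                 # ready state
arm.set_gripper_enable(True)

PICK_Z, SPEED = 150, 100               # mm, mm/s (illustrative values)
pick_spots = [(300, -100), (300, -50), (300, 0), (300, 50)]  # x, y of 4 objects
stack_x, stack_y = 400, 0

for i, (x, y) in enumerate(pick_spots):
    # Pick: move above the object with the tool facing down, close gripper.
    arm.set_position(x, y, PICK_Z, 180, 0, 0, speed=SPEED, wait=True)
    arm.set_gripper_position(100, wait=True)   # close around object (placeholder)
    # Place: stack height grows with each object placed.
    arm.set_position(stack_x, stack_y, PICK_Z + i * 40, 180, 0, 0,
                     speed=SPEED, wait=True)
    arm.set_gripper_position(600, wait=True)   # open to release (placeholder)

arm.disconnect()
```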

Following the demonstration, participants had the opportunity to interact with xArm6. They used Blockly to program the cobot for a simple pick-and-place task. This exercise allowed them to experience the user-friendly interface and the cobot’s responsiveness. The feedback was positive, with many participants noting how quickly they could learn to program and operate the cobot.

The hands-on session helped dispel common misconceptions about the complexity and inflexibility of industrial automation. By the end of the workshop, participants had a better understanding of how cobots can be integrated into their operations to enhance productivity, safety, and cost-effectiveness.

The workshops also emphasized the cost-effectiveness of cobots. Unlike traditional robots that require significant investment in programming and setup, cobots like the xArm6 offer an affordable solution without compromising performance. Their advanced safety systems, which allow them to operate safely alongside human workers, make them a viable option for businesses of all sizes.

Specific feedback from participants highlighted the positive impact and value of these sessions. One attendee noted, “The workshop provided a great insight into how Industry 4.0 can better impact our business and automate our processes.” Another participant appreciated the practical demonstrations, stating, “It was great to see the practical applications during the demonstrations.” Many attendees emphasized that the hands-on experience was invaluable, with one remarking, “Cobots demo was very stimulating. Thoroughly enjoyed the workshop.”

Before the workshop, common reactions included uncertainty about the complexity and applicability of cobots in their operations. After the sessions, many participants expressed confidence in integrating these technologies into their workflows, recognizing the potential for improved efficiency and innovation.

Overall, these workshops effectively bridged the knowledge gap for attendees, providing them with the tools and understanding necessary to embrace Industry 4.0 technologies. As more companies recognize the benefits of automation, the demand for cobots is set to rise, paving the way for a more efficient and innovative industrial landscape.


ARTICLE: Enhancing Collaboration Between Humans and Robots: The Critical Role of Human Factors Research

This article is written by Jasper Vermeulen, PhD researcher at the Australian Cobotics Centre.


Integrating collaborative robots (cobots) in factory environments offers substantial benefits for businesses, including increased operational efficiency and greater product customisation. Compared to traditional industrial robots, cobots are often smaller in size, offering both versatility in various tasks and cost-efficiency. From a technological perspective, the use of cobots can lead to significant improvements in processes.

Cobots: a double-edged sword?

While the advantages of cobots are clear, from a human-centric perspective, a more nuanced conclusion is required. In reality, cobots can present both benefits and challenges for operators. Cobots can help reduce physical strain and mitigate repetitive tasks. On the other hand, cobots may also increase mental effort and working closely together with cobots could cause stress. Furthermore, depending on the workspace and task, working with cobots could affect an operator’s posture for better or worse. This complexity highlights the need for studies into the operator’s experiences of working alongside cobots.

The Discipline of Human Factors

Human Factors is a field dedicated to the study of interactions between humans, technologies, and their environments. This scientific discipline is crucial for enhancing the safety and efficiency of socio-technical systems through interdisciplinary research. In the realm of human-cobot collaboration specifically, Human Factors plays a pivotal role. By integrating diverse research perspectives, from Robotics and Usability Engineering to Design and Psychology, the discipline enables researchers to dissect and understand complex interactions and systems. More importantly, it provides a framework for translating these insights into practical applications, offering concrete design recommendations and effective technology implementation strategies.

Beyond safety

While safety in Human-Robot Interaction has been a central point in Human Factors research, studies specifically addressing human-cobot collaboration are relatively new. Traditionally, much research was aimed at safeguarding the human operator, ensuring their physical safety. Nevertheless, if we aim to improve the overall system performance and well-being of operators, we need to consider additional factors beyond safety. For instance, cobots typically operate at lower speeds as a safety measure; however, experienced operators might prefer a faster pace depending on the task and context. This suggests that speed adjustments could be made without compromising safety.

Looking Forward

As the adoption of cobots continues to grow in industrial settings, it is crucial to deepen our understanding of the factors influencing human-cobot collaboration. Researchers in Human Factors can offer valuable insights by examining the diverse experiences of human operators in cobot-assisted tasks, considering individual differences, different kinds of tasks, various workspaces and cobot capabilities.

Ultimately, while cobots offer the potential to streamline processes, enhance customisation, and reduce costs, their implementation should also focus on improving human operators’ physical safety and mental health. These considerations emphasise the importance of adopting new technologies in genuinely advantageous ways, ensuring a balanced approach to innovation and worker well-being.

Stay Informed on Human Factors in Human-Robot Collaboration

If you’re interested in the latest advancements in human factors research within the field of Human-Robot Collaboration, make sure to follow the activities of Program 3.1 at the Australian Cobotics Centre. We conduct human-centred research using real-world case studies in partnership with industry leaders, focusing on the impact of human factors on operators in practical cobot applications. Our current projects include exploring cobot integration in manufacturing tasks and investigating human factors in robot-assisted surgeries.

Follow our progress on the Australian Cobotics Centre’s LinkedIn page for the latest updates and insights.

ARTICLE: Robotic Blended Sonification: Consequential Robot Sound as Creative Material for Human-Robot Interaction

This article is written by Stine S. Johansen, Jared Donovan, Markus Rittenbruch (Human-Robot-Interaction Program) at Australian Cobotics Centre, and Yanto Browning, Anthony Brumpton (QUT)

Abstract
Current research in robotic sounds generally focuses on either masking the consequential sound produced by the robot or on sonifying data about the robot to create a synthetic robot sound. We propose to capture, modify, and utilise rather than mask the sounds that robots are already producing. In short, this approach relies on capturing a robot’s sounds, processing them according to contextual information (e.g., collaborators’ proximity or particular work sequences), and playing back the modified sound. Previous research indicates the usefulness of non-semantic, and even mechanical, sounds as a communication tool for conveying robotic affect and function. Adding to this, this paper presents a novel approach which makes two key contributions: (1) a technique for real-time capture and processing of consequential robot sounds, and (2) an approach to explore these sounds through direct human-robot interaction. Drawing on methodologies from design, human-robot interaction, and creative practice, the resulting ‘Robotic Blended Sonification’ is a concept which transforms the consequential robot sounds into a creative material that can be explored artistically and within application-based studies.

Keywords
Robotics, Sound, Sonification, Human-Robot Collaboration, Participatory Art, Transdisciplinary

Introduction and Background
The use of sound as a communication technique for robots is an emerging topic of interest in the field of Human-Robot Interaction (HRI). Termed the “Robot Soundscape”, Robinson et al. mapped various contexts in which sound can play a role in HRI. This includes “sound uttered by robots, sound and music performed by robots, sound as background to HRI scenarios, sound associated with robot movement, and sound responsive to human actions” [7, p. 37]. As such, robot sound encompasses both semantic and non-semantic communication as well as the sounds that robots inherently produce through their mechanical configurations. With reference to product design research, the latter is often referred to as “consequential sound” [11]. This short paper investigates the research question: How can consequential robot sound be used as a material for creative exploration of sound in HRI?

This research offers two key contributions: (1) an approach to using, rather than masking [9], sounds directly produced by the robot in real-time, and (2) offering a way to explore those sounds through direct interactions with a robot. As an initial implication, this enables explorations of the sound through creative and open-ended prototyping. In the longer term, it has the potential to leverage and extend collaborators’ existing tacit knowledge about the sounds that mechanical systems make during particular task sequences as well as during normal operation versus breakdowns. Examples of using other communication modalities exist, mostly relying on visual feedback. Visual feedback allows collaborators to see, e.g., the intended robotic trajectory and whether it is safe to move closer to the robot at any time. This assumes, however, that the human-robot collaboration follows a schedule in which the collaborator is aware of approximately when they can approach the robot. Sometimes this timing is not possible to schedule, and collaborators must maintain visual focus on their task. This means that it is crucial to investigate ways of providing information about the robot’s task flow and appropriate timings for collaborative tasks. In other words, there is a need for non-visual feedback modalities that enable collaborators to switch between coexistence and collaboration with the robot. In order to achieve this aim, it is necessary to make these non-visual modalities of robot interaction available for exploration as creative ‘materials’ for prototyping new forms of human-robot interaction.

Prototyping sound design for social robots has received particular attention in prior research, e.g., movement sonification for social HRI [4]. However, this knowledge cannot be directly transferred when designing affective communication, including sound, for robots that are not anthropomorphic, e.g., mobile field robots, industrial robots for manufacturing, and other typical utilitarian robots [1]. In prior research on consequential robot sound, Moore et al. studied the sounds of robot servos and outlined a roadmap for research into “consequential sonic interaction design” [6]. The authors state that robot sound experiences are subjective and call for approaches that address this rather than, e.g., upgrade the quality of a servo to reduce noise objectively. Frid et al. also explored mechanical sounds of the Nao robot for movement sonification in social HRI [4]. They evaluated this through Amazon Mechanical Turk, where participants rated the sounds according to different perceptual measures. Extending this into ways of modifying robot sounds, robotic sonification that conveys intent without requiring visual focus has been created by mapping movements in each degree of freedom for a robot arm to pitch and timbre [12]. The sound in that study, however, was created from sample motor sounds as opposed to the actual, real-time consequential sounds of the robot. Another way this has been investigated is with video of a moving robot, Fetch, overlaid with mechanical, harmonic, or musical sound to communicate the robot’s inner workings and movement [8]. This previous research indicates that people can identify nuances of robotic sounds but has yet to address whether that is also the case for real-time consequential robot sounds.

Robotic Blended Sonification
Robot sound has received increasing interest throughout the past decade, particularly for designing sounds uttered or performed by robots, background sound, sonification, or masking consequential robot sound [9]. Extending this previous research, we contribute a novel approach to utilising and designing with consequential robot sound. Our approach for ‘Robotic Blended Sonification’ bridges prior research on consequential sound, movement sonification, and sound that is responsive to human actions. Furthermore, it relies on the real-time sounds of the robot as opposed to pre-made recordings that are subsequently aligned to movements. A challenge for selecting the sounds a robot could make is that people have a strong set of pre-existing associations between robots and certain kinds of sounds. On one hand, this might provide a basis for helping people to interpret an intended meaning or signal from a sound (e.g., a danger signal), but it also risks that robot sounds remain clichéd (beeps and boops), and may ultimately limit the creative potentials for robotic sound design. In this sense, Robotic Blended Sonification is an appealing approach because it offers the possibility of developing a sonic palette grounded in the physical reality of the robot, while also allowing for aspects of these sounds to be amplified, attenuated, or manipulated to create new meanings. Blended sonification has previously been described as “the process of manipulating physical interaction sounds or environmental sounds in such a way that the resulting sound signal carries additional information of interest while the formed auditory gestalt is still perceived as coherent auditory event” [10]. As such, it is an approach to augment existing sounds for purposes such as conveying information to people indirectly.

To achieve real-time robotic blended sonification, we use a series of electromagnetic field microphones placed at key articulation points on the robot. Our current setup uses a Universal Robots UR10 collaborative robotic arm. The recorded signals are amplified and sent to a Digital Audio Workstation (DAW), where they can be blended with sampled and synthesized elements and processed in distinct ways to create interactive soundscapes. Simultaneously with the real-time capture of the robot’s audio signals, we enable direct interactions with the robot through the Grasshopper programming environment within Rhinoceros 3D (Rhino) and the RobotExMachina bridge and Grasshopper plugin [3]. We capture the real-time pose of the robot’s Tool Center Point (TCP) in Grasshopper. Interaction is made possible via the Open Sound Control (OSC) protocol, with the Grasshopper programming environment sending a series of OSC values for the TCP. The real-time positional data also includes the pitch, roll, and yaw of each section of the robotic arm. Interaction with the robot arm is enabled through the Fologram plugin for Grasshopper and Rhino. The virtual robot is anchored to the position of the physical robot. The distance between the base of the robot and a smartphone is then calculated and used to direct the TCP towards the collaborator. This enables real-time interaction for exploring sounds for different motions and speeds. For our prototype, OSC messages from the robotic movements are received in the Ableton Live DAW, along with the Max/MSP programming environment, and then assigned to distinct parameters of digital signal processing tools to alter elements of the soundscape. The plan for the initial prototype setup is to use five discrete speakers: a quadraphonic setup to allow for 360-degree coverage in a small installation space, along with a point-source speaker located at the base of the robotic arm. The number of speakers is scalable to the size of the installation space and intent of the installation. The point-source speaker alone is enough to gather data on the effects of robotic blended sonification on HRI, while multi-speaker configurations allow for better coverage in larger environments, enable investigations for non-dyadic human-robot interactions, and provide more creative options when it comes to designing soundscapes.
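
In the actual pipeline, the OSC values are sent from Grasshopper via RobotExMachina; as a rough Python stand-in (the OSC addresses, host, port, and the synthesised trajectory are all placeholders), the snippet below shows how a TCP pose could be streamed to a DAW using the python-osc package.

```python
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)    # placeholder DAW host/port

# Stream a mock TCP pose at ~20 Hz; in the real setup these values come
# from the robot via Grasshopper rather than being synthesised here.
for t in range(200):
    x = 0.4 + 0.1 * math.sin(t / 20)           # metres, fake trajectory
    pitch, roll, yaw = 0.0, 0.0, math.sin(t / 40)
    client.send_message("/robot/tcp/position", [x, 0.0, 0.3])
    client.send_message("/robot/tcp/orientation", [pitch, roll, yaw])
    time.sleep(0.05)
```

On the receiving side, each OSC address can be mapped to a digital signal processing parameter (e.g., filter cutoff or reverb send) in Ableton Live or Max/MSP, which is the blending step the paper describes.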

Directions for Future Research
Ways of using non-musical instruments for musical expressions have a long history within sound and music art. Early examples include the work of John Cage, e.g., Child of Tree (1975) where a solo percussionist performs with electrically amplified plant materials [2], or the more recent concert Inner Out (2015) by Nicola Giannini where melting ice blocks are turned into percussive elements [5]. In a similar manner, our approach enables performance with robotic sound, subsequently allowing for a creative exploration of how those sounds affect and could be utilised for better human-robot collaborations. With the proposed approach, we identify new immediate avenues for research in the form of the following research questions:

Robot Sound as Creative Material
In what ways can the consequential sound of a robot be used as a creative material in explorations of robot sound design? This can entail investigations through different configurations, including dyadic and non-dyadic interactions, levels of human-robot proximity, and different spatial arrangements. Furthermore, the interaction itself will play a crucial part in the way that the sound is both created and experienced, e.g., whether a collaborator is touching the robot physically or, as in our current setup, interacting at a distance.

Processing Consequential Robot Sound
In what ways can or should we process the consequential sound material? Two key points are connected to this. First, the consequential sound forms a basis for the resulting sound output, which can be modified in various ways. Future research can explore these modifications, including the fact that different robots produce different consequential sounds, which will in turn lead to different meaningful modifications. Second, our approach can be complemented by capturing data from the surrounding environment to use as input for sound processing.

Engaging People in Reflection
How can we prompt people’s reflections about consequential robot sounds through direct interaction? While prior research has demonstrated ways to investigate consequential robot sound, e.g., through overlaying video with mechanical sounds, our approach enables people to explore sounds that result from their own interactions with a robot. This can be utilised for both structured and unstructured setups, depending on the purpose of the investigation. In our current setup, we invite artistic exploration and expression. For more utilitarian purposes, the setup can be created in the context within which a robot is or could be present. This could support other existing methods for mapping and designing interventions into soundscapes.

Conclusion
In this short paper, we have described a novel approach for exploring and prototyping with consequential robot sound. This approach extends prior research by providing a technique for capturing, processing, and reproducing sounds in real-time during collaborators’ interactions with the robot.

Acknowledgments
This research is jointly funded through the Australian Research Council Industrial Transformation Training Centre (ITTC) for Collaborative Robotics in Advanced Manufacturing under grant IC200100001 and the QUT Centre for Robotics.

References
[1] Bethel, C. L., and Murphy, R. R. 2006. Auditory and other non-verbal expressions of affect for robots. In AAAI fall symposium: aurally informed performance, 1–5.
[2] Cage, J. 1975. Child of Tree. Peters Edition EP 66685. https://www.johncage.org/pp/John-Cage-Work-Detail.cfm?work_ID=40.
[3] del Castello, G. 2023. RobotExMachina. GitHub repository. https://github.com/RobotExMachina.
[4] Frid, E.; Bresin, R.; and Alexanderson, S. 2018. Perception of mechanical sounds inherent to expressive gestures of a Nao robot: implications for movement sonification of humanoids.
[5] Giannini, N. 2015. Inner Out. Nicola Giannini. https://www.nicolagiannini.com/portfolio/inner-out-2/.
[6] Moore, D.; Tennent, H.; Martelaro, N.; and Ju, W. 2017. Making noise intentional: A study of servo sound perception. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’17, 12–21. New York, NY, USA: Association for Computing Machinery.
[7] Robinson, F. A.; Bown, O.; and Velonaki, M. 2023. The robot soundscape. In Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies. Springer. 35–65.
[8] Robinson, F. A.; Velonaki, M.; and Bown, O. 2021. Smooth operator: Tuning robot perception through artificial movement sound. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’21, 53–62. New York, NY, USA: Association for Computing Machinery.
[9] Trovato, G.; Paredes, R.; Balvin, J.; Cuellar, F.; Thomsen, N. B.; Bech, S.; and Tan, Z.-H. 2018. The sound or silence: investigating the influence of robot noise on proxemics. In 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN), 713–718. IEEE.
[10] Tünnermann, R.; Hammerschmidt, J.; and Hermann, T. 2013. Blended sonification: Sonification for casual interaction. In ICAD 2013 - Proceedings of the International Conference on Auditory Display.
[11] Van Egmond, R. 2008. The experience of product sounds. In Product experience. Elsevier. 69–89.
[12] Zahray, L.; Savery, R.; Syrkett, L.; and Weinberg, G. 2020. Robot gesture sonification to enhance awareness of robot status and enjoyment of interaction. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 978–985. IEEE.

Author Biographies
* Stine S. Johansen is a Postdoctoral Research Fellow in the Australian Cobotics Centre. Her research focuses on designing interactions with and visualisations of complex cyberphysical systems.
* Yanto Browning is a Lecturer in music and interactive technologies at Queensland University of Technology, with extensive experience as an audio engineer.
* Anthony Brumpton is an artist academic working in the field of Aural Scenography. He likes the sounds of birds more than planes, but thinks there is a place for both.
* Jared Donovan is an Associate Professor at Queensland University of Technology. His research focuses on finding better ways for people to interact with new technologies in their work, currently focusing on the design of robotics to improve manufacturing.
* Markus Rittenbruch, Professor of Interaction Design at Queensland University of Technology, specialises in the participatory design of collaborative technologies. His research also explores designerly approaches to study how collaborative robots can better support people in work settings.

ARTICLE: Reflections from the 2023 OZCHI workshop on Empowering People in Human-Robot Collaboration

This article is written by Stine Johansen, Postdoctoral Research Fellow (Human-Robot-Interaction Program) at Australian Cobotics Centre.

 

At the OzCHI 2023 conference, researchers from the Australian Cobotics Centre (QUT and UTS) and CINTEL (CSIRO) co-organised a workshop on the topic of “Empowering People in Human-Robot Collaboration: Why, How, When, and for Whom”. Our previous workshop at the OzCHI 2022 conference showed that there is growing interest in the area from both researchers and practitioners across Oceania. In the 2022 workshop, discussions centred around human roles in human-robot collaboration, empathy for robots, approaches to designing and evaluating human-robot collaboration, and ethical considerations. With the 2023 workshop, we aimed to take a step further by (1) discussing underlying assumptions that shape our research and (2) identifying pathways towards shared visions for future research. While it is impossible to capture all the nuances of our discussions here, I will use the limited space in this article to provide a peek into two of the topics that emerged. I hope this can serve as inspiration to anyone reflecting on the why, when, how, and who of empowering people in human-robot collaboration.

Topic 1: Robots as tools for creativity

While an increasing number of digital tools to support creative work are coming into the world, questions remain about how that support can or should be designed. A robot might aid someone in drawing, 3D printing, milling furniture, etc., but it is up to people to ask the right kinds of questions for artistic expressions and experiences. Furthermore, while a robot might be able to manipulate physical materials, the processes of moulding, cutting, drawing, painting, etc., are part of an artistic conversation that artists and creative professionals have with those materials. Workshop participants proposed that there is potential for further empirical studies of how creativity works as a basis for how robots can support it.

There are a number of examples out there where designers, developers, and artists explore roles that robots can play for creative work. Here are some that I have come across:

YouTuber and artist Jazza evaluated the drawing capabilities of a small desk robot by Line-us. The video starts with a highly unsuccessful replication of Jazza’s drawings and moves into an interactive game session, e.g., playing hangman. It seems that replicating an artist’s drawings is a fun gimmick but perhaps does not offer any further space for creativity. (See the video here)

The humanoid robot Ai-Da paints “self”-portraits, which seems ironic given that a robot inherently does not have a self or an identity, at least from the perspective of current understandings of consciousness. The artist, Aidan Meller, states that the point of Ai-Da is to raise questions about what role people have if robots are able to replicate our work. (The Guardian published this article about Ai-Da in 2021)

By the way, on the topic of robot consciousness, our workshop panel member Associate Professor Christoph Bartneck, University of Canterbury, hosts a podcast in which the topic was discussed. You can listen to the episode here.

In a more academic direction, the MIT Media Lab has conducted research on ways that robots can help children be creative. They designed a set of games that support children either through demonstrating how to implement a creative idea or by prompting children to reflect by, e.g., asking them questions. (Read about the research here)

Topic 2: Assumptions about robots

Even though much research and development has already shown a multitude of ways that robots can perform tasks in work and everyday life, there are still underlying assumptions about robots and people that drive these developments. The phrases we use between ourselves, participants, collaborators, industry partners, etc., to describe a design concept or how a robot could solve a problem are part of a larger storytelling. Such storytelling comes through in narratives of, e.g., robots taking jobs from workers. We might ask ourselves how we contribute to these narratives, both in public forums and in research publications.

As a side note to this, fiction and ‘speculation’ are increasingly utilised as tools for designing human-robot interaction. Some examples include Auger, 2014, Luria et al., 2020, and Grafström et al., 2022. Speculative design is not a new method; rather, it is becoming a well-established approach within human-computer interaction (HCI), interaction design, and now also human-robot interaction.

What are our visions and how can we get there?

Our shared visions for the future of human-robot collaboration are not necessarily surprising, but they are reassuring: collaborative robots should support people. There are, however, a multitude of ways that people can be supported. These range from support (1) during an actual task, e.g., heavy lifting, improving work safety, and providing effective communication, (2) by fitting into dynamic and unstructured environments, and (3) as part of the foundation for people to have a healthy and rewarding work life.

Different pathways exist towards making this a reality. Here are a few examples taken from the workshop discussion. First, while the Australasian context might present some unique challenges, we can still learn from other parts of the world, e.g., in terms of the socio-economic pressures that drive robotic development. Second, we can continuously reframe the problems we choose to prioritise. There are perhaps opportunities to move away from the framing of robots performing “dull, dirty, and dangerous” work to robots performing collaborative, inclusive, and even creative work. Third, increasingly dynamic settings require robotic interfaces that provide modular solutions. This prompts the question of how end users might use modular robotic systems, and whether this approach is best suited for certain problems and contexts. Finally, participants agreed that we increasingly need a network of researchers in this area to support each other.

In the spirit of the last point, I invite researchers and practitioners to visit the Australian Cobotics Centre at QUT, Brisbane. You are also welcome to join our public seminars, both as audience and presenter. I look forward to continuing this crucial conversation.

References

James Auger. 2014. Living with robots: a speculative design approach. J. Hum.-Robot Interact. 3, 1 (February 2014), 20–42. https://doi.org/10.5898/JHRI.3.1.Auger

Anna Grafström, Moa Holmgren, Simon Linge, Tomas Lagerberg, and Mohammad Obaid. 2022. A Speculative Design Approach to Investigate Interactions for an Assistant Robot Cleaner in Food Plants. In Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference (NordiCHI ’22). Association for Computing Machinery, New York, NY, USA, Article 50, 1–5. https://doi.org/10.1145/3547522.3547682

Michal Luria, Ophir Sheriff, Marian Boo, Jodi Forlizzi, and Amit Zoran. 2020. Destruction, Catharsis, and Emotional Release in Human-Robot Interaction. J. Hum.-Robot Interact. 9, 4, Article 22 (December 2020), 19 pages. https://doi.org/10.1145/3385007

Online links

Jazza trying the line-us robot:

https://www.youtube.com/watch?v=oZYqrPnpDoY

Article about Ai-Da:

https://www.theguardian.com/culture/2021/may/18/some-people-feel-threatened-face-to-face-with-ai-da-the-robot-artist

MIT Media Lab projects on child-robot interaction for creativity:

https://www.media.mit.edu/projects/creativity-robots/overview/

Christoph Bartneck’s podcast episode on robot consciousness:

https://open.spotify.com/episode/5sFNVXTiv9Sh3u360DlZFy?si=808266bb27ea4b73