
ARTICLE: Enhancing Collaboration Between Humans and Robots: The Critical Role of Human Factors Research

This article is written by Jasper Vermeulen, PhD researcher at the Australian Cobotics Centre.


Integrating collaborative robots (cobots) in factory environments offers substantial benefits for businesses, including increased operational efficiency and greater product customisation. Compared to traditional industrial robots, cobots are often smaller, offering both versatility across a variety of tasks and cost-efficiency. From a technological perspective, the use of cobots can lead to significant improvements in processes.

Cobots: a double-edged sword?

While the advantages of cobots are clear, a more nuanced conclusion is required from a human-centric perspective. In reality, cobots can present both benefits and challenges for operators. They can reduce physical strain and take over repetitive tasks. On the other hand, they may also increase mental effort, and working closely with cobots can cause stress. Furthermore, depending on the workspace and task, working with cobots can affect an operator’s posture for better or worse. This complexity highlights the need for studies into operators’ experiences of working alongside cobots.

The Discipline of Human Factors

Human Factors is a field dedicated to the study of interactions between humans, technologies, and their environments. This scientific discipline is crucial for enhancing the safety and efficiency of socio-technical systems through interdisciplinary research. Specifically, in the realm of human-cobot collaboration, the discipline of Human Factors plays a pivotal role. By integrating diverse research perspectives—from Robotics and Usability Engineering to Design and Psychology—this discipline enables researchers to dissect and understand complex interactions and complex systems. More importantly, it provides a framework for translating these insights into practical applications, offering concrete design recommendations and effective technology implementation strategies.

Beyond safety

While safety in Human-Robot Interaction has been a central focus of Human Factors research, studies specifically addressing human-cobot collaboration are relatively new. Traditionally, much research aimed at safeguarding the human operator, ensuring their physical safety. However, if we aim to improve overall system performance and the well-being of operators, we need to consider factors beyond safety. For instance, cobots typically operate at lower speeds as a safety measure; yet experienced operators might prefer a faster pace depending on the task and context. This suggests that speed adjustments could be made without compromising safety.

Looking Forward

As the adoption of cobots continues to grow in industrial settings, it is crucial to deepen our understanding of the factors influencing human-cobot collaboration. Researchers in Human Factors can offer valuable insights by examining the diverse experiences of human operators in cobot-assisted tasks, considering individual differences, different kinds of tasks, various workspaces and cobot capabilities.

Ultimately, while cobots offer the potential to streamline processes, enhance customisation, and reduce costs, their implementation should also focus on improving human operators’ physical safety and mental health. These considerations emphasise the importance of adopting new technologies in genuinely advantageous ways, ensuring a balanced approach to innovation and worker well-being.

Stay Informed on Human Factors in Human-Robot Collaboration

If you’re interested in the latest advancements in human factors research within the field of Human-Robot Collaboration, make sure to follow the activities of Program 3.1 at the Australian Cobotics Centre. We conduct human-centred research using real-world case studies in partnership with industry leaders, focusing on the impact of human factors on operators in practical cobot applications. Our current projects include exploring cobot integration in manufacturing tasks and investigating human factors in robot-assisted surgeries.

Follow our progress on the Australian Cobotics Centre’s LinkedIn page for the latest updates and insights.

ARTICLE: Robotic Blended Sonification: Consequential Robot Sound as Creative Material for Human-Robot Interaction

This article is written by Stine S. Johansen, Jared Donovan, and Markus Rittenbruch (Human-Robot-Interaction Program) at the Australian Cobotics Centre, and Yanto Browning and Anthony Brumpton (QUT).

Abstract
Current research in robotic sounds generally focuses on either masking the consequential sound produced by the robot or on sonifying data about the robot to create a synthetic robot sound. We propose to capture, modify, and utilise, rather than mask, the sounds that robots are already producing. In short, this approach relies on capturing a robot’s sounds, processing them according to contextual information (e.g., collaborators’ proximity or particular work sequences), and playing back the modified sound. Previous research indicates the usefulness of non-semantic, and even mechanical, sounds as a communication tool for conveying robotic affect and function. Adding to this, the paper presents a novel approach that makes two key contributions: (1) a technique for real-time capture and processing of consequential robot sounds, and (2) an approach to explore these sounds through direct human-robot interaction. Drawing on methodologies from design, human-robot interaction, and creative practice, the resulting ‘Robotic Blended Sonification’ is a concept which transforms the consequential robot sounds into a creative material that can be explored artistically and within application-based studies.

Keywords
Robotics, Sound, Sonification, Human-Robot Collaboration, Participatory Art, Transdisciplinary

Introduction and Background
The use of sound as a communication technique for robots is an emerging topic of interest in the field of Human-Robot Interaction (HRI). In work termed the “Robot Soundscape”, Robinson et al. mapped various contexts in which sound can play a role in HRI. This includes “sound uttered by robots, sound and music performed by robots, sound as background to HRI scenarios, sound associated with robot movement, and sound responsive to human actions” [7, p. 37]. As such, robot sound encompasses both semantic and non-semantic communication as well as the sounds that robots inherently produce through their mechanical configurations. With reference to product design research, the latter is often referred to as “consequential sound” [11]. This short paper investigates the research question: How can consequential robot sound be used as a material for creative exploration of sound in HRI?

This research offers two key contributions: (1) an approach to using, rather than masking [9], sounds directly produced by the robot in real time, and (2) a way to explore those sounds through direct interactions with a robot. As an initial implication, this enables exploration of the sound through creative and open-ended prototyping. In the longer term, it has the potential to leverage and extend collaborators’ existing tacit knowledge about the sounds that mechanical systems make during particular task sequences, as well as during normal operation versus breakdowns. Examples of using other communication modalities exist, mostly relying on visual feedback. Visual feedback allows collaborators to see, e.g., the intended robot trajectory and whether it is safe to move closer to the robot at any time. This assumes, however, that the human-robot collaboration follows a schedule in which the collaborator is aware of approximately when they can approach the robot. Sometimes this timing cannot be scheduled, and collaborators must maintain visual focus on their task. It is therefore crucial to investigate ways of providing information about the robot’s task flow and appropriate timings for collaborative tasks. In other words, there is a need for non-visual feedback modalities that enable collaborators to switch between coexistence and collaboration with the robot. To achieve this aim, it is necessary to make these non-visual modalities of robot interaction available for exploration as creative ‘materials’ for prototyping new forms of human-robot interaction.

Prototyping sound design for social robots has received particular attention in prior research, e.g., movement sonification for social HRI [4]. However, this knowledge cannot be directly transferred when designing affective communication, including sound, for robots that are not anthropomorphic, e.g., mobile field robots, industrial robots for manufacturing, and other typical utilitarian robots [1]. In prior research on consequential robot sound, Moore et al. studied the sounds of robot servos and outlined a roadmap for research into “consequential sonic interaction design” [6]. The authors state that robot sound experiences are subjective and call for approaches that address this rather than, e.g., upgrading the quality of a servo to reduce noise objectively. Frid et al. also explored mechanical sounds of the Nao robot for movement sonification in social HRI [4]. They evaluated this through Amazon Mechanical Turk, where participants rated the sounds according to different perceptual measures. Extending this into ways of modifying robot sounds, robotic sonification that conveys intent without requiring visual focus has been created by mapping movements in each degree of freedom of a robot arm to pitch and timbre [12]. The sound in that study, however, was created from sampled motor sounds as opposed to the actual, real-time consequential sounds of the robot. Another way this has been investigated is with video of a moving robot, Fetch, overlaid with mechanical, harmonic, or musical sound to communicate the robot’s inner workings and movement [8]. This previous research indicates that people can identify nuances of robotic sounds, but it has yet to address whether that is also the case for real-time consequential robot sounds.

Robotic Blended Sonification
Robot sound has received increasing interest throughout the past decade, particularly for designing sounds uttered or performed by robots, background sound, sonification, or masking consequential robot sound [9]. Extending this previous research, we contribute a novel approach to utilising and designing with consequential robot sound. Our approach, ‘Robotic Blended Sonification’, bridges prior research on consequential sound, movement sonification, and sound that is responsive to human actions. Furthermore, it relies on the real-time sounds of the robot as opposed to pre-made recordings that are subsequently aligned to movements. A challenge for selecting the sounds a robot could make is that people have a strong set of pre-existing associations between robots and certain kinds of sounds. On the one hand, this might provide a basis for helping people to interpret an intended meaning or signal from a sound (e.g., a danger signal); on the other, it risks that robot sounds remain clichéd (beeps and boops), which may ultimately limit the creative potential of robotic sound design. In this sense, Robotic Blended Sonification is an appealing approach because it offers the possibility of developing a sonic palette grounded in the physical reality of the robot, while also allowing for aspects of these sounds to be amplified, attenuated, or manipulated to create new meanings. Blended sonification has previously been described as “the process of manipulating physical interaction sounds or environmental sounds in such a way that the resulting sound signal carries additional information of interest while the formed auditory gestalt is still perceived as coherent auditory event” [10]. As such, it is an approach to augmenting existing sounds for purposes such as conveying information to people indirectly.

To achieve real-time robotic blended sonification, we use a series of electromagnetic field microphones placed at key articulation points on the robot. Our current setup uses a Universal Robots UR10 collaborative robotic arm. The recorded signals are amplified and sent to a Digital Audio Workstation (DAW), where they can be blended with sampled and synthesised elements and processed in distinct ways to create interactive soundscapes. Simultaneously with the real-time capture of the robot’s audio signals, we enable direct interactions with the robot through the Grasshopper programming environment within Rhinoceros 3D (Rhino) and the RobotExMachina bridge and Grasshopper plugin [3]. We capture the real-time pose of the robot’s Tool Center Point (TCP) in Grasshopper. Interaction is made possible via the Open Sound Control (OSC) protocol, with the Grasshopper environment sending a series of OSC values for the TCP. The real-time positional data also includes the pitch, roll, and yaw of each section of the robotic arm. Interaction with the robot arm is enabled through the Fologram plugin for Grasshopper and Rhino. The virtual robot is anchored to the position of the physical robot. The distance between the base of the robot and a smartphone is then calculated and used to direct the TCP towards the collaborator. This enables real-time interaction for exploring sounds for different motions and speeds. For our prototype, OSC messages from the robotic movements are received in the Ableton Live DAW, along with the Max/MSP programming environment, and then assigned to distinct parameters of digital signal processing tools to alter elements of the soundscape. The initial prototype setup is planned to use five discrete speakers: a quadraphonic setup to allow for 360-degree coverage in a small installation space, along with a point-source speaker located at the base of the robotic arm. The number of speakers is scalable to the size of the installation space and the intent of the installation. The point-source speaker alone is enough to gather data on the effects of robotic blended sonification on HRI, while multi-speaker configurations allow for better coverage in larger environments, enable investigations of non-dyadic human-robot interactions, and provide more creative options when it comes to designing soundscapes.
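For readers who want a concrete sense of the OSC plumbing described above, the following is a minimal sketch in Python using the python-osc library. The OSC address names (/robot/tcp/...), the DAW port, and the distance-to-cutoff mapping are illustrative assumptions for this sketch, not the authors’ actual Grasshopper patch or Max/MSP mapping.

```python
# Minimal sketch of an OSC bridge like the one described above, assuming the
# python-osc library (pip install python-osc). Address names and the
# distance-to-cutoff mapping are illustrative assumptions, not the
# prototype's actual patch.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

DAW_IP, DAW_PORT = "127.0.0.1", 9000  # where Ableton Live / Max/MSP listens
client = SimpleUDPClient(DAW_IP, DAW_PORT)

def send_tcp_pose(x, y, z, roll, pitch, yaw):
    """Forward the robot's Tool Center Point (TCP) pose to the DAW as OSC values."""
    client.send_message("/robot/tcp/position", [x, y, z])
    client.send_message("/robot/tcp/orientation", [roll, pitch, yaw])
    # Map distance from the robot base (assumed at the origin) to a normalised
    # 0..1 control value, e.g. for a filter cutoff applied to the captured
    # consequential sound. 1.3 m approximates the UR10's reach.
    distance = math.sqrt(x**2 + y**2 + z**2)
    client.send_message("/sonification/cutoff", min(distance / 1.3, 1.0))

if __name__ == "__main__":
    # Example: stream a small circular TCP motion at roughly 30 Hz.
    for step in range(300):
        t = step / 30.0
        send_tcp_pose(0.4 * math.cos(t), 0.4 * math.sin(t), 0.5, 0.0, math.pi / 2, t)
        time.sleep(1 / 30)
```

On the receiving side, these addresses can be mapped to digital signal processing parameters, for example via Max/MSP’s udpreceive object or a Max for Live device in Ableton Live.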

Directions for Future Research
Ways of using non-musical instruments for musical expression have a long history within sound and music art. Early examples include the work of John Cage, e.g., Child of Tree (1975), where a solo percussionist performs with electrically amplified plant materials [2], or the more recent concert Inner Out (2015) by Nicola Giannini, where melting ice blocks are turned into percussive elements [5]. In a similar manner, our approach enables performance with robotic sound, in turn allowing for a creative exploration of how those sounds affect human-robot collaboration and how they could be utilised to improve it. With the proposed approach, we identify new immediate avenues for research in the form of the following research questions:

Robot Sound as Creative Material
In what ways can the consequential sound of a robot be used as a creative material in explorations of robot sound design? This can entail investigations of different configurations, including dyadic and non-dyadic interactions, levels of human-robot proximity, and different spatial arrangements. Furthermore, the interaction itself will play a crucial part in the way that the sound is both created and experienced, e.g., whether a collaborator is touching the robot physically or, as in our current setup, interacting at a distance.

Processing Consequential Robot Sound
In what ways can or should we process the consequential sound material? Two key points are connected to this. First, the consequential sound forms a basis for the resulting sound output, which can be modified in various ways. Future research can explore these modifications, including the fact that different robots produce different consequential sounds, which will in turn lead to different meaningful modifications. Second, our approach can be complemented by capturing data from the surrounding environment to use as input for sound processing.

Engaging People in Reflection
How can we prompt people’s reflections about consequential robot sounds through direct interaction? While prior research has demonstrated ways to investigate consequential robot sound, e.g., through overlaying video with mechanical sounds, our approach enables people to explore sounds that result from their own interactions with a robot. This can be utilised for both structured and unstructured setups, depending on the purpose of the investigation. Our current setup invites artistic exploration and expression. For more utilitarian purposes, the setup can be recreated in the context within which a robot is or could be present. This could support other existing methods for mapping and designing interventions into soundscapes.

Conclusion
In this short paper, we have described a novel approach for exploring and prototyping with consequential robot sound. This approach extends prior research by providing a technique for capturing, processing, and reproducing sounds in real-time during collaborators’ interactions with the robot.

Acknowledgments
This research is jointly funded through the Australian Research Council Industrial Transformation Training Centre (ITTC) for Collaborative Robotics in Advanced Manufacturing under grant IC200100001 and the QUT Centre for Robotics.

References
[1] Bethel, C. L., and Murphy, R. R. 2006. Auditory and other non-verbal expressions of affect for robots. In AAAI fall symposium: aurally informed performance, 1–5.
[2] Cage, J. 1975. Child of Tree. Peters Edition EP 66685. https://www.johncage.org/pp/John-Cage-Work-Detail.cfm?work_ID=40.
[3] del Castello, G. 2023. RobotExMachina. GitHub repository. https://github.com/RobotExMachina.
[4] Frid, E.; Bresin, R.; and Alexanderson, S. 2018. Perception of mechanical sounds inherent to expressive gestures of a Nao robot: implications for movement sonification of humanoids.
[5] Giannini, N. 2015. Inner Out. Nicola Giannini. https://www.nicolagiannini.com/portfolio/inner-out-2/.
[6] Moore, D.; Tennent, H.; Martelaro, N.; and Ju, W. 2017. Making noise intentional: A study of servo sound perception. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’17, 12–21. New York, NY, USA: Association for Computing Machinery.
[7] Robinson, F. A.; Bown, O.; and Velonaki, M. 2023. The robot soundscape. In Cultural Robotics: Social Robots and Their Emergent Cultural Ecologies. Springer. 35–65.
[8] Robinson, F. A.; Velonaki, M.; and Bown, O. 2021. Smooth operator: Tuning robot perception through artificial movement sound. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’21, 53–62. New York, NY, USA: Association for Computing Machinery.
[9] Trovato, G.; Paredes, R.; Balvin, J.; Cuellar, F.; Thomsen, N. B.; Bech, S.; and Tan, Z.-H. 2018. The sound or silence: investigating the influence of robot noise on proxemics. In 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN), 713–718. IEEE.
[10] Tünnermann, R.; Hammerschmidt, J.; and Hermann, T. 2013. Blended sonification: Sonification for casual interaction. In ICAD 2013: Proceedings of the International Conference on Auditory Display.
[11] Van Egmond, R. 2008. The experience of product sounds. In Product experience. Elsevier. 69–89.
[12] Zahray, L.; Savery, R.; Syrkett, L.; and Weinberg, G. 2020. Robot gesture sonification to enhance awareness of robot status and enjoyment of interaction. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 978–985. IEEE.

Author Biographies
* Stine S. Johansen is a Postdoctoral Research Fellow in the Australian Cobotics Centre. Her research focuses on designing interactions with and visualisations of complex cyberphysical systems.
* Yanto Browning is a Lecturer in music and interactive technologies at Queensland University of Technology, with extensive experience as an audio engineer.
* Anthony Brumpton is an artist academic working in the field of Aural Scenography. He likes the sounds of birds more than planes, but thinks there is a place for both.
* Jared Donovan is an Associate Professor at Queensland University of Technology. His research focuses on finding better ways for people to interact with new interactive technologies in their work, currently focusing on the design of robotics to improve manufacturing.
* Markus Rittenbruch, Professor of Interaction Design at Queensland University of Technology, specialises in the participatory design of collaborative technologies. His research also explores designerly approaches to study how collaborative robots can better support people in work settings.

Meet our E.P.I.C. Researcher, James Dwyer

James Dwyer is a PhD researcher based at Queensland University of Technology and his project is part of the Human-Robot-Interaction Program at the Australian Cobotics Centre. We interviewed James recently to find out more about why he does what he does.

  • Tell us a bit about yourself and your research with the Centre?

Beginning my academic journey with a Bachelor of Psychology, I have always been deeply interested in human behaviour and the ways in which people communicate and connect with each other. After finishing this degree and entering the working world, I found a job in customer support, where I encountered firsthand the frustrations caused by poorly designed products and the impact technology has on people’s daily lives, sparking my interest in studying Industrial Design. Through this degree, I became interested in human-centred design as a way of approaching the design process to better meet user needs. My transition into design research, particularly focusing on HRI, was a natural progression, driven by a desire to blend my interests in people, technology, and prototyping as a research tool. At the Australian Cobotics Centre, my PhD research aims to develop an HRC prototyping toolkit that supports collaborative design approaches within manufacturing and surgical contexts. This work seeks to fill a crucial gap by integrating end-user needs and context of use into the design of cobots through collaborative design approaches. Working with industry partners Stryker and Cook Medical will be an invaluable part of this process, grounding my research in practical application and amplifying its impact. In the long term, I envision my research contributing significantly to the field by advancing HRC methodologies, promoting a more human-centred approach to HRC research, and broadening the scope of cobot applications across various industries.

  • Why did you decide to be a part of the Australian Cobotics Centre?

My decision to join the Australian Cobotics Centre was born from my interest in the Project 2.2 research topic, “Human-Robot Interaction (HRI) prototyping toolkit.” This topic aligned with my interest in prototyping as a research approach and engaging with end users through collaborative design processes, particularly in the context of HRI research. This project extends the work of my master’s research, where I developed a virtual reality-based HRI prototype. This experience was enlightening, presenting me with numerous ideas for future projects and highlighting the vast possibilities that Extended Reality (XR) technologies hold for prototyping and design research. This initial interest led me to look further into the Australian Cobotics Centre. What resonated with me about the Centre was its commitment to people-focused and innovative research. The Centre’s emphasis on collaboration between academia and industry also aligned with my belief in the importance of applied research that tackles real-world problems.

  • What project are you most proud of throughout your career and why?

One of the highlights of my career so far has been my involvement as the creative lead in the Soundline Project during my time with the ARS Electronica Futurelab Academy at QUT. The project explored how technology can facilitate group flow, transforming festival queues, lines and waiting areas, which might be considered an ‘error’ in the festival experience, into playful and creative opportunities. Soundline was designed to allow people at any level of musical proficiency to contribute meaningfully to a collective soundscape through physical interaction with five different musical instruments. The instruments allowed participants to create music through physical movement, fostering a unique collaborative experience between the performers and audience members. This project is particularly dear to me for several reasons. It was a testament to the power of interdisciplinary collaboration, combining elements of design, technology, music, and dance performance. The hands-on experience of guiding the project from its initial concept to its public execution was also invaluable, teaching me the importance of an iterative design approach that integrated different perspectives and skill sets. Furthermore, the recognition of our work as a finalist for the IxDA Interaction Awards in 2019 was an affirming milestone, underscoring the project’s impact and the potential of interdisciplinary practice in design and technological development.

  • What do you hope the long-term impact of your work will be?

My research aims to address a critical gap in Human-Robot Interaction (HRI) by developing a prototyping toolkit that supports collaborative design approaches with end-users, a facet often overlooked in current practices. The need for a more human-centred approach in HRC research is evident, as current trends lean heavily towards technology-centric methods. The socio-cultural and socio-technical challenges presented by implementing cobots in manufacturing and surgical contexts necessitate a balance between technological advancements and human needs. By advocating for a design process that respects and integrates worker preferences and concerns, my work contributes to a more inclusive and considerate approach to cobot implementation, potentially leading to safer, more comfortable work environments. The development of a prototyping toolkit that encourages co-design could significantly enhance interdisciplinary collaboration, bridging gaps between fields such as engineering, social science, and design. By making the design process more inclusive and participatory, this toolkit has the potential to influence broader discussions on technology implementation, worker participation, and the ethical considerations of integrating robotics in the workplace.

  • Aside from your research, what topic could you give an hour-long presentation on with little to no preparation?

Beyond my academic pursuits, my passions are varied and deeply rooted in the explorative and speculative realms of science fiction and design and the creative pursuits of music and art. This fascination extends to the disciplined and intricate world of martial arts, where I have spent half my life trying to achieve some level of proficiency but have gained, at the very least, a great appreciation for the mental discipline the practice instils in my daily life. Music also holds a special place in my heart. While I still struggle with theory, the process of song construction and the communal experience of improvisation captivate me, providing a unique form of collective engagement and reflection. Art, too, is a refuge for me, though my sketches may not win awards. The immersive process of striving to capture the right expression or scene is a form of meditation, a way to lose myself in creativity. Similarly, the joy of understanding processes through the act of making or deconstructing complex ideas is a thread that runs through all my hobbies. While I might describe myself as a “jack of all trades and master of none”, this eclectic mix of interests is interconnected, each informing and enriching the others; these activities reflect and reinforce my approach to life and work, revealing my weaknesses and areas for growth. Rather than talking about these topics, however, I often find I am more interested in delving into the depths of someone else’s expertise. In this, I find myself in the role of the perpetual student, eager to absorb and understand more.

CONGRATULATIONS James – Confirmation of Candidature

We extend our congratulations to James Dwyer, our PhD researcher, for successfully completing his confirmation seminar on March 20th.

James’s thesis, titled “How Can We Design for Human-Robot Collaboration: the Need for a Human-Robot Collaboration Prototyping Toolkit,” is under the supervision of Jared Donovan, Markus Rittenbruch, Stine Johansen, and Rafael Gomez FDIA from QUT (Queensland University of Technology), and the review panel included Marianella Chamorro-Koc and Claire Brophy.

His project is dedicated to developing a Human-Robot Collaboration Prototyping Toolkit that integrates both physical and simulated robotic systems. This initiative aims to streamline the exploration, development, and testing of novel processes and work routines. Through a collaboration with industry partner Cook Medical, the research team will explore various prototyping techniques and utilise advanced technologies such as motion tracking, mixed-reality interfaces, and lightweight interactive components to safely explore new interaction concepts.

This innovative approach promises to equip designers, engineers, and end-users with the essential resources for enhancing future human-robot collaboration within the manufacturing landscape.

For more details about James’s project, please see Project 2.2: Human-Robot Interaction prototyping toolkit on the Australian Cobotics Centre website.

Welcoming a new Industry Partner – TAFE Queensland

We are pleased to announce that TAFE Queensland has joined the Centre as an Industry Partner!

TAFE Queensland offers our Centre valuable insight into Queensland’s manufacturing businesses and their workforces. It is one of Australia’s largest education providers, with more than 120,000 students trained each year across the state, nationally, and internationally.

We look forward to sharing more about our collaboration as the year progresses.

Meet our E.P.I.C. Researcher, Akash Brinly Hettiarachchi

Akash Hettiarachchi is a PhD researcher based at Queensland University of Technology and his project is part of the Human-Robot Workforce Program at the Australian Cobotics Centre.
His current research interests include diversified work groups, the attraction of different social groups to the advanced manufacturing sector, and overcoming workforce gaps.

We interviewed Akash recently to find out more about why he does what he does.

  • Tell us a bit about yourself and your research with the Centre?

I have worked in the manufacturing industry for almost 20 years, specializing as an HR Professional for major global manufacturing companies. Throughout my career, I have gained invaluable experience working with diverse workforces across different regions and cultures. This experience has not only provided me with practical insights but also enhanced my theoretical knowledge in the field. I have had the privilege of collaborating with industry experts and witnessing firsthand the advancements in technology within the manufacturing industry. This exposure has fueled my passion for research and equipped me with a deeper understanding of how to contribute effectively to the industry.

Moving forward, I believe it is essential to forge strong partnerships with industry leaders and gain a thorough understanding of the practical implications of workforce development alongside technological advancements. By combining this practical knowledge with my diverse background, I can contribute to the manufacturing industry by formulating innovative business strategies that provide a competitive edge.

  • Why did you decide to be a part of the Australian Cobotics Centre?

The Australian Cobotics Centre stands out from other research groups by actively engaging with industry and directly investigating pressing issues in the manufacturing sector. It also serves as a platform for researchers with diverse backgrounds to collaborate and share their findings with a wider audience. Additionally, maintaining ongoing connections with technical and HR experts opens up new research opportunities and fosters improved collaboration.

  • What project are you most proud of throughout your career and why?

Promoting diversity in the workforce is a project that is extremely important to me. Throughout my professional career, I have unfortunately encountered discrimination in various areas, including recruitment, selection, promotions, and training and development. This discrimination has been especially prevalent among underrepresented groups. Additionally, I have observed firsthand how managers perpetuate unacceptable behaviour towards minority groups, highlighting the urgent need for support from business leaders.

  • What do you hope the long-term impact of your work will be?

I intend to work with manufacturing organizations as a consultant, offering guidance on effectively managing the new generations and other underrepresented groups in the context of Industry 4.0. I also aim to blend traditional HR strategies with technological support and leverage technology to develop more inclusive HR strategies. The HR community I am currently connected with is an excellent platform for sharing my research findings and exploring new opportunities to enhance global HR strategies.

  • Aside from your research, what topic could you give an hour-long presentation on with little to no preparation?

As an HR Practitioner, I am most comfortable presenting on manufacturing-related HR strategies, problems faced by women in the manufacturing sector, and the impact of technology on the workforce. These topics require minimal preparation on my part.

CONGRATULATIONS Akash – Confirmation of Candidature

Massive congratulations to our PhD researcher, Akash Brinly Hettiarachchi, who completed his confirmation seminar last week on Wednesday, 6 March!


His thesis is entitled “Cobots intervention for a diverse Australian manufacturing workforce”. His supervisory team includes A/Prof Penny Williams (QUT) and Professor Greg Hearn (QUT), and the review panel included A/Prof Erika French and A/Prof Jared Donovan.

His project addresses the existing labour shortage in the manufacturing sector: to facilitate sustainable growth, it is imperative to explore potential solutions for attracting and retaining a diverse workforce. The research seeks to synergise technological solutions (Cobots) with HR strategies (workforce diversity) to address the prevalent challenge of talent scarcity within the manufacturing sector.

It will investigate avenues for incorporating human and social factors into the design of Cobots and assess how this integration can help overcome potential barriers to entry and retention for a diverse manufacturing workforce. The study will adopt a qualitative research methodology, encompassing three key stages: descriptive analysis, focus group discussions, and case study analysis.

New PhD Researcher, Zongyuan Zhang

We are pleased to welcome Zongyuan Zhang, our newest team member. Zongyuan is a PhD researcher at QUT (Queensland University of Technology), supervised by Jonathan Roberts, and will be actively involved in the Biomimic Cobots program as the lead researcher on Project 1.1: Cobot contact tasks through multi-sensory deep learning.

Zongyuan’s research interests centre around the application of deep learning in the field of robotics and the study of motion theories of robots with different configurations. He has experience in control system design and mechanical structure design, and has participated in projects including an underwater photography robot, a driverless racing car, an exoskeleton mechanical arm, a dual-rotor aircraft, and a remote-control robotic arm, some of which are currently undergoing commercialisation.

We look forward to hearing more as Zongyuan’s project progresses!

Welcome Zongyuan!

Two papers accepted for ISEA 2024

Our researchers have two papers accepted to the International Symposium on Electronic Art (ISEA 2024), which will be held in Meanjin (Brisbane) from 21 to 29 June.

  • Robotic Blended Sonification: Consequential Robot Sound as Creative Material for Human-Robot Interaction, by Postdoctoral Research Fellow, Dr Stine Johansen from QUT (Queensland University of Technology), with co-authors Yanto Browning, Anthony Brumpton, Jared Donovan, and Markus Rittenbruch.
  • Track Back: A Human Robot Movement Installation Utilising Unity Digital Twin and Human Bio-mimicry by Chief Investigator, Dr John McCormick from Swinburne University of Technology. As part of the Symposium, John will present an exhibition demonstration at UAP | Urban Art Projects.

Find out more: https://lnkd.in/gkXdKrAJ