Australian Cobotics Centre Annual Symposium 2025: A Celebration of Collaboration and Impact

The Australian Cobotics Centre’s annual symposium brought together researchers, HDRs, postdocs, and industry partners for three dynamic days of reflection, knowledge-sharing, and hands-on learning. This year’s event highlighted the incredible progress made in 2025 and set the stage for an impactful final year in 2026.

Day 1: Reflection and Connection

We kicked off the symposium by welcoming teams from QUT, Swinburne University of Technology, and University of Technology Sydney to Brisbane. Day 1 focused on reviewing achievements and planning for the future.

Each of our five research programs—presented by postdocs Sheila Sutjipto, Dr Valeria Macalupú, Alan Burden, Mariadas Capsran Roshan, and Melinda Laundon—shared key milestones from 2025 and outlined priorities for the Centre’s final year. Annual program reviews followed, providing a valuable opportunity to:

  • Highlight areas of excellence
  • Share good practices
  • Identify opportunities to enhance impact

The conversations reinforced the strength of our research programs and our commitment to continuous improvement.

Day 2: Cobots in Action – Industry Workshops

Day 2 shifted the spotlight to industry, with five hands-on workshops designed to connect research outcomes with real-world applications. Read more here: Cobots in Action Workshops – 27th November 

These workshops were a fantastic opportunity to translate research into practical tools for industry. A huge thank you to everyone who joined us! We’re already planning more workshops for 2026—stay tuned.

Day 3: Sharing Expertise and Building Connections

The final day celebrated the incredible contributions of our HDRs and postdocs through skill-building, discussion, and collaboration.

Skill-Building Workshop
Dr Valeria Macalupú delivered an inspiring session on visually presenting research positioning, equipping participants with tools to map and communicate their research focus.

Panel Discussions
Postdocs facilitated thought-provoking conversations with HDRs on topics such as:

  • What impact has the ACC made?
  • Is the human-robot workforce ready?
  • The HDR journey
  • Should we call robots collaborative?

BarCamp Conversations
Dynamic, informal discussions enabled participants to explore shared interests and plan collaborations for 2026 and beyond.

🤝 HDR Forum
A dedicated space for HDRs to connect, share experiences, and discuss what they need for the next 12 months.

Day 3 truly showcased the power of collaboration and the depth of expertise within our group. Thank you to everyone who contributed!

Looking Ahead

2026 marks the final year of the Australian Cobotics Centre, and we won’t be slowing down. Expect more workshops, research showcases, and opportunities to engage with our work as we continue to shape the future of collaborative robotics.

2025 OzCHI Conference

Many of our team were in Sydney last week for OzCHI, the Australian Conference on Computer-Human Interaction.

Our Deputy Director, Prof Glenda Caldwell, delivered a thought-provoking Provocation Talk titled: Beyond the Lab: Preparing HCI for Real-World Human-Robot Collaboration.

The group had several papers accepted for Late Breaking Work.

Success at the QUT Vice-Chancellor's Awards for Excellence

We’re thrilled for Professor Jonathan Roberts, Director of the Australian Cobotics Centre and Professor in Robotics at QUT (Queensland University of Technology), who has been awarded the QUT Vice-Chancellor’s Award for Excellence in Leadership!

These awards recognise staff who deliver exceptional outcomes, and Jon exemplifies this through his inclusive, supportive leadership. As well as leading the Australian Cobotics Centre, Jon goes above and beyond to mentor Engineering students, foster cross-disciplinary and cross-university collaborations, and champion humanoid robotics research at QUT. He promotes an atmosphere where people feel confident to try new approaches, knowing that every experience, whether a success or setback, drives growth, creativity, and new opportunities.

A special mention also goes to others in our QUT ACC Team for their wins.

Read more about the 2025 winners here: QUT – Vice-Chancellor’s Awards for Excellence

 


ARTICLE: From Gut Feel to Evidence: Making the Case for Technology Adoption Through Quality Economics

By Munia Ahamed, UTS PhD Researcher, Australian Cobotics Centre

Technology adoption decisions in manufacturing are often characterised by a tension between perceived opportunity and perceived risk. This is particularly true for small and medium enterprises considering investments in collaborative robotics and automated quality systems, where the upfront costs are concrete, but the returns can feel uncertain.

Research consistently identifies this uncertainty as a key barrier. A 2024 UK government review of advanced technology adoption found that financial barriers—particularly difficulties justifying investment decisions due to uncertain returns—ranked among the most frequently cited obstacles for manufacturers [1], [2], [3]. Similar patterns emerge in studies of Australian SMEs, where decision-makers report hesitancy in embracing new technologies despite recognising their potential benefits.

This article argues that one way to address this challenge is through a more rigorous application of quality cost economics—a well-established body of theory that provides frameworks for quantifying the costs of defects, rework, and quality failures. By grounding technology adoption decisions in these frameworks, manufacturers can move from intuition-based decision-making toward evidence-based investment analysis.

The Economics of Quality: Theoretical Foundations
The concept of quality costing has a long history in operations management. Joseph Juran introduced the notion of the “cost of poor quality” in his 1951 Quality Control Handbook, arguing that organisations inevitably pay for quality—either through prevention and detection, or through the consequences of failure. His work established that appraisal and failure costs are typically much higher than prevention costs, suggesting that investment in getting things right the first time yields significant returns.

Philip Crosby extended this thinking in the 1970s and 1980s with his influential argument that “quality is free”. Crosby’s position was not that quality improvement carries no cost, but rather that the price of nonconformance—scrap, rework, warranty claims, lost customers—far exceeds the price of conformance. His research suggested that well-run quality programs could yield gains of 20 to 25 percent of revenues, with the cost of nonconformance reducible by half within 12 to 18 months of systematic effort.

Armand Feigenbaum’s Prevention-Appraisal-Failure (PAF) model provides a useful taxonomy for categorising quality costs. Prevention costs include activities designed to avoid defects occurring in the first place—process design, training, quality planning. Appraisal costs cover inspection, testing, and measurement activities. Failure costs, both internal (scrap, rework) and external (warranty, returns, reputation damage), represent the consequences of quality problems that were not prevented or detected.

In practical terms, this means that spending more on preventing defects upfront usually reduces the overall cost of quality problems later. Failure costs tend to decrease faster than prevention costs increase, so the total quality cost goes down. This insight remains as relevant today as when it was first articulated, and it provides a theoretical basis for evaluating investments in quality-enhancing technologies.
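The trade-off described above can be made concrete with a back-of-envelope calculation. The following Python sketch totals the four PAF cost categories for a baseline scenario and an improved scenario; all dollar figures are invented for illustration and are not drawn from the article.

```python
# Hypothetical figures illustrating the PAF (Prevention-Appraisal-Failure)
# model. All numbers are invented for illustration only.

def total_quality_cost(prevention, appraisal, internal_failure, external_failure):
    """Total cost of quality under the PAF taxonomy."""
    return prevention + appraisal + internal_failure + external_failure

# Baseline: little prevention, high failure costs (annual $, hypothetical)
baseline = total_quality_cost(prevention=10_000, appraisal=40_000,
                              internal_failure=120_000, external_failure=80_000)

# After investing more in prevention: failure costs fall faster than
# prevention costs rise, so the total quality cost decreases.
improved = total_quality_cost(prevention=35_000, appraisal=30_000,
                              internal_failure=50_000, external_failure=20_000)

print(baseline)  # 250000
print(improved)  # 135000
```

Even though prevention spending more than triples in this toy scenario, the overall cost of quality falls, which is the core of the "quality is free" argument.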

The Economics of Detection Timing
A related concept concerns the timing of defect detection. The 1:10:100 rule, attributed to George Labovitz and Yu Sang Chang (1992), captures the exponential escalation of costs as defects progress through the value chain. In its simplest form, the rule suggests that addressing a problem at its source costs one unit; finding and correcting it later in the process costs ten units; and dealing with it after it reaches the customer costs one hundred units.

While the specific ratios vary by context, the underlying principle is well-supported: defects that escape early detection accumulate additional processing costs, and defects that reach customers incur costs that extend beyond direct remediation to include relationship damage, complaint handling, and potential regulatory consequences.
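The escalation can be sketched in a few lines. The multipliers below are the textbook 1:10:100 ratios, and the defect count and unit cost are hypothetical placeholders rather than measured values.

```python
# A minimal sketch of the 1:10:100 rule: cost per defect escalates by an
# order of magnitude at each later detection stage. The ratios are the
# textbook illustration; real multipliers vary by context.

COST_MULTIPLIER = {"source": 1, "in_process": 10, "customer": 100}

def defect_cost(n_defects, stage, unit_cost=50.0):
    """Total remediation cost for n defects caught at a given stage."""
    return n_defects * unit_cost * COST_MULTIPLIER[stage]

# 20 defects, $50 base cost to fix at source (hypothetical figures):
print(defect_cost(20, "source"))      # 1000.0
print(defect_cost(20, "in_process"))  # 10000.0
print(defect_cost(20, "customer"))    # 100000.0
```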

This principle has direct relevance to technology adoption decisions. Automated inspection systems and vision-guided collaborative robots do not merely accelerate quality checking—they fundamentally alter when inspection occurs. Real-time, in-process detection catches problems before they accumulate downstream costs, shifting the organisation’s quality cost profile in favourable directions.

Barriers to Evidence-Based Decision Making
If the theoretical case for quality investment is strong, why do manufacturers—particularly SMEs—struggle to act on it? The literature identifies several contributing factors.

First, quality costs are often poorly measured. While direct costs like scrap and rework may be tracked, hidden costs—inspection time, schedule disruption, expedited shipping to replace defective goods—frequently go unrecorded. Without accurate baseline data, it becomes difficult to project returns on quality-enhancing investments.

Second, uncertainty about technology performance creates decision paralysis. Studies of SME technology adoption consistently find that decision-makers hesitate when they cannot point to demonstrated results in comparable contexts. This creates a circular problem: evidence is needed to justify investment, but evidence comes from having invested.

Third, competing priorities and resource constraints mean that quality investments must compete with other demands on limited capital. In this environment, investments with uncertain or difficult-to-quantify returns tend to be deferred in favour of more immediately tangible needs.

ROI Calculators as Analytical Tools

One response to these challenges is the development of structured ROI calculators tailored to specific technology investments. When well-designed, such tools serve several functions beyond simply generating a payback estimate.

First, they impose discipline on baseline measurement. To complete the calculator, users must quantify current defect rates, rework costs, and inspection time—data that many organisations have not systematically collected. The process of gathering this information often yields insights independent of any technology decision.

Second, they make assumptions explicit. A good ROI model does not obscure uncertainty; it surfaces it. Users can see what improvement rates are assumed, what cost factors are included, and how sensitive the conclusions are to different inputs. This transparency supports more informed discussion among stakeholders.

Third, they provide a framework for comparing alternatives. By standardising how costs and benefits are categorised, calculators enable like-for-like comparison of different technology options or implementation approaches.

The value of such tools lies not in their precision—all projections involve uncertainty—but in their capacity to structure thinking and ground decisions in operational data rather than vendor claims or general optimism.
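The three functions above can be illustrated with a minimal calculator sketch. The structure below (the baseline fields, the savings formula, and every dollar figure and percentage) is a hypothetical example of the general approach, not a description of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class QualityBaseline:
    """Current-state figures the user must measure (annual, hypothetical units)."""
    rework_cost: float        # $ spent on rework per year
    scrap_cost: float         # $ lost to scrap per year
    inspection_hours: float   # manual inspection hours per year
    hourly_rate: float        # loaded labour rate, $/hour

def simple_payback_years(baseline: QualityBaseline,
                         capex: float,
                         defect_reduction: float,
                         inspection_time_saved: float) -> float:
    """Payback period under explicit, user-visible assumptions.

    defect_reduction and inspection_time_saved are fractions in [0, 1];
    keeping them as named inputs is what makes the assumptions explicit.
    """
    annual_saving = (
        defect_reduction * (baseline.rework_cost + baseline.scrap_cost)
        + inspection_time_saved * baseline.inspection_hours * baseline.hourly_rate
    )
    return capex / annual_saving

b = QualityBaseline(rework_cost=80_000, scrap_cost=40_000,
                    inspection_hours=2_000, hourly_rate=60)
# $150k system, assumed 40% defect reduction, 50% less manual inspection:
print(round(simple_payback_years(b, capex=150_000,
                                 defect_reduction=0.4,
                                 inspection_time_saved=0.5), 2))  # 1.39
```

Note the discipline the inputs impose: the calculator cannot run until rework, scrap, inspection time, and labour rate have actually been measured.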

Practical Recommendations

For manufacturers seeking to apply quality cost economics and the 1:10:100 principle to their technology decisions, several practical steps can strengthen the quality of investment analysis.

Establish quality cost baselines. Before evaluating any technology investment, spend time measuring what is currently unmeasured: rework hours, scrap rates, inspection time, defect escape rates. Even approximate figures provide a foundation for analysis that intuition cannot.

Map defect origins and detection points. Understanding where in the process problems arise—and where they are currently caught—identifies the opportunities for earlier detection. The gap between origin and detection represents accumulated cost that prevention or earlier inspection could avoid.

Use sensitivity analysis. Rather than seeking a single ROI figure, explore how conclusions change under different assumptions. What defect reduction would be needed for the investment to break even? How does the payback period shift if improvement is 20% less than projected? This approach acknowledges uncertainty while still supporting decision-making.
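Both questions posed in this step can be answered with a few lines of arithmetic. The capex and failure-cost figures below are hypothetical, chosen only to show the shape of the analysis.

```python
# A sensitivity sketch: rather than reporting a single ROI figure, vary the
# assumed defect reduction and watch the payback period move. All figures
# are hypothetical.

capex = 150_000.0          # upfront cost of the inspection system
failure_cost = 120_000.0   # current annual rework + scrap

def payback_years(defect_reduction: float) -> float:
    """Years to recover capex given a fractional defect reduction."""
    return capex / (defect_reduction * failure_cost)

# What defect reduction is needed to break even within a 3-year target?
break_even = capex / (3 * failure_cost)
print(f"break-even reduction: {break_even:.0%}")  # 42%

# Projected 40% reduction, plus scenarios 20% above and below projection:
for r in (0.48, 0.40, 0.32):
    print(f"{r:.0%} reduction -> {payback_years(r):.2f} year payback")
```

Presenting the range rather than the point estimate makes the decision robust to the exact improvement figure being wrong.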

Consider pilot implementations. Where full-scale investment feels premature, smaller-scale trials with defined metrics can generate context-specific evidence. This reduces risk while building organisational capability and confidence.

The Path Forward
The theoretical foundations for quality cost analysis are well-established, with decades of research supporting the economic logic of prevention over detection and early detection over late. What is often lacking is the practical application of these frameworks to specific technology adoption decisions.

ROI calculators, when grounded in quality economics and used as analytical tools rather than sales devices, can help bridge this gap. They provide a structured means of translating established theory into operational decision-making, replacing intuition with evidence and making the case for investment in terms that resonate with resource-constrained decision-makers.

For Australian manufacturing to remain globally competitive, we need to accelerate thoughtful adoption of collaborative robotics and quality automation. Fact-based decision tools are one contribution toward that goal.

We welcome discussion on this topic. How has your organisation approached the challenge of justifying technology investments? What frameworks or tools have proven useful?

References
[1] Make UK and RSM UK, Investment Monitor 2024: Using Data to Drive Manufacturing Productivity. London, UK: Make UK, 2024.
[2] Make UK and BDO LLP, Manufacturing Outlook: 2024 Quarter 4. London, UK: Make UK, 2024.
[3] UK Government, Invest 2035: The UK’s Modern Industrial Strategy — Green Paper. London, UK: HM Government, Oct. 2024.

 

27th Australian Conference on Robotics and Automation (ACRA)

The 27th Australian Conference on Robotics and Automation (ACRA) was held at Edith Cowan University in Western Australia from 1–3 December, bringing together leading researchers and industry innovators. The Australian Cobotics Centre proudly contributed five accepted papers, highlighting advances in collaborative robotics, humanoid systems, and AI-driven decision-making:

  • Solomonoff-Inspired Hypothesis Ranking with LLMs for Prediction Under Uncertainty
    Authors: Josh Barber, Rourke Young, Cameron Coombe, Will Browne (QUT)
  • User Preference for Handle Force Transformation for Quadruped Guide Robot
    Authors: Luke Bouttell, Marc Carmichael, Sarath Kodagoda (UTS)
  • Robots Watching Robots – Pose Estimation Applied to Humanoid Robots
    Authors: James Matthew Young, Jonathan Roberts (QUT)
  • Big Robots, Small Tasks – Task Scale Reduction Using a Simple Pantograph Mechanism
    Authors: Sean Burgess, Braydon Pithie, Jonathan Roberts (QUT)
  • Training Humanoid Robots to Walk in Lunar Gravity Using Reinforcement Learning
    Authors: Benjamin Klein, Jonathan Roberts (QUT)

The conference concluded with a highlight for the Centre: Professor Will Browne was announced as the new President of the Australian Robotics and Automation Association (ARAA).